Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
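As a quick sanity check on such moment formulas, the mean first-passage time can also be estimated by plain Monte Carlo simulation of the Feller process. The sketch below is an illustration, not the paper's method: parameter values are invented, chosen so that the asymptotic depolarization mu*tau exceeds the threshold (the suprathreshold regime, where crossings are certain).

```python
import math
import random

def feller_fpt(mu, tau, sigma, s, x0=0.0, dt=1e-3, t_max=50.0, rng=random):
    """Simulate one first-passage time of the Feller process
    dX = (mu - X/tau) dt + sigma*sqrt(X) dW through the constant boundary s."""
    x, t = x0, 0.0
    while t < t_max:
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += (mu - x / tau) * dt + sigma * math.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)  # the origin is an inaccessible lower bound of the state space
        t += dt
        if x >= s:
            return t
    return float("inf")  # no crossing within t_max

random.seed(42)
# Suprathreshold regime: asymptotic mean mu*tau = 1.5 exceeds the threshold s = 1.0.
times = [feller_fpt(mu=1.5, tau=1.0, sigma=0.3, s=1.0) for _ in range(200)]
mean_fpt = sum(times) / len(times)
```

The empirical mean can then be compared against the closed-form moment expressions to validate an implementation of the estimators.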
Environmental Transport Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', which calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis...
Inhalation Exposure Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
Inhalation Exposure Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This
Determining avalanche modelling input parameters using terrestrial laser scanning technology
2013-01-01
In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as is the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. Determining the areas and volumes of snow involved in an avalanche is more complex. Such calculations require high-resolution spa...
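The volume calculation mentioned above essentially amounts to differencing two snow-surface grids from successive laser scans. A minimal sketch, with invented 1 m-resolution grids standing in for real terrestrial laser scanning data:

```python
# Hypothetical 1 m-resolution snow-surface grids (metres above a common datum),
# scanned before and after the avalanche; values are invented for illustration.
pre_event  = [[2.0, 2.1, 2.3],
              [2.2, 2.4, 2.6],
              [2.1, 2.5, 2.8]]
post_event = [[1.2, 1.3, 1.6],
              [1.4, 1.5, 1.9],
              [1.3, 1.8, 2.2]]
cell_area = 1.0  # m^2 per grid cell

def released_volume(pre, post, cell_area):
    """Volume of snow removed between the two scans (m^3);
    only cells where the surface dropped contribute."""
    return sum(max(p - q, 0.0) * cell_area
               for row_p, row_q in zip(pre, post)
               for p, q in zip(row_p, row_q))

vol = released_volume(pre_event, post_event, cell_area)
```

A deposition volume follows symmetrically by summing cells where the surface rose; real scans would of course be much larger rasters with registration and interpolation steps first.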
Land Building Models: Uncertainty in and Sensitivity to Input Parameters
2013-08-01
ERDC/CHL CHETN-VI-44, August 2013, by Ty V. Wamsley. Cited sources include the Louisiana Coastal Area Ecosystem Restoration Projects Study, Vol. 3, Final Integrated Feasibility Study, and the Nourishment Module, Chapter 8, in the Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive Study. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a...
Agricultural and Environmental Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Environmental Transport Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699]).
Soil-Related Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure
Assigning probability distributions to input parameters of performance assessment models
Energy Technology Data Exchange (ETDEWEB)
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)]
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, the method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed, and a simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) for the situation where only a limited amount of information is available.
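Two of the approaches discussed, method-of-moments fitting of a continuous distribution and Bayesian updating with new information, can be sketched in a few lines. The gamma data, prior values, and test counts below are invented for illustration:

```python
import random
import statistics

# --- (a) Fitting a continuous distribution by the method of moments ---
# For a gamma distribution, shape k and scale theta follow from the
# sample mean m and variance v:  k = m^2 / v,  theta = v / m.
random.seed(7)
data = [random.gammavariate(2.5, 1.2) for _ in range(5000)]  # synthetic "field data"
m = statistics.fmean(data)
v = statistics.variance(data)
k_hat, theta_hat = m * m / v, v / m  # should recover roughly (2.5, 1.2)

# --- (c) Bayesian updating with a conjugate prior ---
# Beta(a, b) prior on a failure probability, updated with new test outcomes.
a, b = 2.0, 8.0              # prior belief: mean 0.2
failures, trials = 3, 30     # new information
a_post, b_post = a + failures, b + (trials - failures)
posterior_mean = a_post / (a_post + b_post)
```

Maximum likelihood fitting and goodness-of-fit testing would follow the same pattern but require numerical optimization, which is why the report's treatment of commercial software packages is practically useful.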
Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters
DEFF Research Database (Denmark)
Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.;
2010-01-01
To predict the effects caused by coronal mass ejections (CMEs), we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output come from the input parameters of the upper limit...
Estimating input parameters from intracellular recordings in the Feller neuronal model
Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta
2010-03-01
We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
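The kind of estimation described here can be illustrated with a naive sketch: simulate a discretely sampled Feller trajectory (without the absorbing threshold), recover the drift by least squares on the increments, and recover the infinitesimal variance from the quadratic variation. All parameter values are invented, and the paper's actual estimators are more refined than this:

```python
import math
import random

random.seed(1)
mu_true, tau_true, sigma_true = 1.0, 0.8, 0.4
dt, n = 1e-3, 200_000

# Euler simulation of dX = (mu - X/tau) dt + sigma*sqrt(X) dW, sampled at step dt.
x = [0.5]
for _ in range(n):
    xi = x[-1]
    step = (mu_true - xi / tau_true) * dt \
        + sigma_true * math.sqrt(max(xi, 0.0)) * random.gauss(0.0, math.sqrt(dt))
    x.append(max(xi + step, 1e-12))

# Least-squares fit of the increments: dX/dt ~ mu - X/tau,
# so the regression intercept estimates mu and the slope estimates -1/tau.
xs = x[:-1]
ys = [(x[i + 1] - x[i]) / dt for i in range(n)]
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) \
    / sum((a - mx) ** 2 for a in xs)
mu_hat = my - slope * mx
tau_hat = -1.0 / slope

# Quadratic-variation estimate of sigma^2 (the multiplicative-noise coefficient):
# E[(X_{i+1} - X_i)^2] ~ sigma^2 * X_i * dt for small dt.
sigma2_hat = sum((x[i + 1] - x[i]) ** 2 / (xs[i] * dt) for i in range(n)) / n
sigma_hat = math.sqrt(sigma2_hat)
```

Note that the variance estimate converges much faster than the drift estimates, which is the usual situation for discretely observed diffusions; the threshold-induced bias the paper corrects for does not arise here because no absorbing boundary is imposed.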
A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model
Energy Technology Data Exchange (ETDEWEB)
Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y
2011-10-27
Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
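Sampling a 21+ dimensional parameter space for such an ensemble is typically done with a stratified design rather than a full grid. A minimal Latin hypercube sketch follows; the parameter ranges are invented, and the project's actual sampling framework may differ:

```python
import random

def latin_hypercube(n_samples, bounds, rng):
    """Draw one stratified sample per interval in each dimension, then randomly
    pair strata across dimensions (a common design for perturbed-parameter
    ensembles, covering each 1-D range evenly with few runs)."""
    strata = []
    for lo, hi in bounds:
        width = (hi - lo) / n_samples
        col = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(col)
        strata.append(col)
    return [tuple(col[i] for col in strata) for i in range(n_samples)]

rng = random.Random(0)
# Three hypothetical sub-grid-scale parameters with assumed ranges
bounds = [(0.5, 2.0), (1e-4, 1e-3), (0.1, 0.9)]
design = latin_hypercube(8, bounds, rng)   # 8 model configurations to run
```

Each tuple in `design` would parameterize one model run; with 21 to 28 dimensions and thousands of runs, the same construction applies unchanged, which is why such designs scale where grids cannot.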
Milella, Pamela; Bisantino, Tiziana; Gentile, Francesco; Iacobellis, Vito; Trisorio Liuzzi, Giuliana
2012-11-01
The paper suggests a methodology, based on performance metrics, to select the optimal set of inputs and parameters to be used for the simulation of river flow discharges with a semi-distributed hydrologic model. The model is applied at the daily scale in a semi-arid basin of Southern Italy (Carapelle river, basin area: 506 km2), for which rainfall and discharge series for the period 2006-2009 are available. Inputs and parameters were classified into two subsets: the former, spatially distributed, to be selected among different options; the latter, lumped, to be calibrated. Different data sources of (or methodologies to obtain) spatially distributed data were explored for the first subset. In particular, the FAO Penman-Monteith, Hargreaves and Thornthwaite equations were tested for the evaluation of reference evapotranspiration, which plays a key role in hydrological modeling in semi-arid areas. The availability of LAI maps from different remote sensing sources was exploited in order to enhance the characterization of the vegetation state and consequently of the spatio-temporal variation in actual evapotranspiration. Different types of pedotransfer functions were used to derive the soil hydraulic parameters of the area. For each configuration of the first subset of data, a manual calibration of the second subset of parameters was carried out. Both the manual calibration of the lumped parameters and the selection of the optimal distributed dataset were based on the calculation and comparison of different performance metrics measuring the distance between observed and simulated discharge series. Results not only show the best options for estimating reference evapotranspiration, crop coefficients, LAI values and the hydraulic properties of soil, but also provide significant insights regarding the use of different performance metrics, including traditional indexes such as RMSE, NSE and the index of agreement, along with the more recent Benchmark...
Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai
2017-01-01
This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since the disturbance and the actuator fault are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state becomes associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed by Lyapunov theory. Finally, a wind turbine system with disturbance and actuator fault is used to test the proposed method. The simulations demonstrate the effectiveness and performance of the proposed method.
Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty
Directory of Open Access Journals (Sweden)
K. Steffens
2013-08-01
The assessment of climate change impacts on the risk for pesticide leaching needs careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-west Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, as driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-west Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios could provide robust probabilistic estimates of future pesticide losses and assessments of changes in pesticide leaching risks.
Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty
Directory of Open Access Journals (Sweden)
K. Steffens
2014-02-01
Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, as driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
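The change-factor (delta-change) scaling described above is simple to sketch: each month of the reference series is multiplied by that month's change factor (for precipitation; additive offsets would be used for temperature). The factors and values below are invented, not taken from the study:

```python
# Hypothetical monthly change factors (future/reference ratios for precipitation)
change_factors = {1: 1.15, 2: 1.10, 3: 1.05, 4: 0.95, 5: 0.90, 6: 0.85,
                  7: 0.80, 8: 0.85, 9: 0.95, 10: 1.05, 11: 1.10, 12: 1.15}

def scale_series(reference, factors):
    """Scale a reference series of (month, value) pairs into a future series."""
    return [(month, value * factors[month]) for month, value in reference]

reference_precip = [(1, 60.0), (7, 20.0), (12, 80.0)]  # monthly totals in mm
future_precip = scale_series(reference_precip, change_factors)
```

The design choice worth noting is that delta-change scaling preserves the day-to-day weather structure of the reference period and only shifts its monthly statistics, which is exactly why it pairs naturally with an ensemble of change-factor sets from different climate projections.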
Comparison of input parameters regarding rock mass in analytical solution and numerical modelling
Yasitli, N. E.
2016-12-01
Characteristics of stress redistribution around a tunnel excavated in rock are of prime importance for an efficient tunnelling operation and for maintaining stability. It is well known that rock mass properties, together with the in-situ stress field and tunnel geometry, are the most important factors affecting stability. Induced stresses and the resultant deformation around a tunnel can be approximated by means of analytical solutions and numerical modelling. However, the success of these methods depends on assumptions and input parameters, which must be representative of the rock mass. Laboratory testing yields the mechanical properties of intact rock only, so these must be converted to rock mass properties. The aim of this paper is to demonstrate the importance of a proper representation of rock mass properties as input data for analytical solutions and numerical modelling. For this purpose, intact rock data were converted into rock mass data by using the Hoek-Brown failure criterion and empirical relations. Stress-deformation analyses, together with determination of the yield zone thickness, were carried out using analytical solutions and numerical analyses with the FLAC3D programme. The results indicate that incomplete or incorrect design input causes stability and economic problems in the tunnel. For this reason, analytical results and rock mass data should be used together during tunnel design. In addition, this study shows that numerical modelling results should be applied to the tunnel design for both the stability and the economy of the support.
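The intact-rock-to-rock-mass conversion can be sketched directly from the generalized Hoek-Brown relations in their widely used 2002 form; the mi, GSI and D values below are illustrative, not taken from the paper:

```python
import math

def hoek_brown_mass(mi, gsi, d=0.0):
    """Convert the intact-rock constant mi into rock-mass Hoek-Brown parameters
    (mb, s, a) using the Geological Strength Index GSI and the disturbance
    factor D, following the Hoek et al. (2002) generalized criterion."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

# Illustrative values: mi = 10, GSI = 60, undisturbed excavation (D = 0)
mb, s, a = hoek_brown_mass(mi=10.0, gsi=60.0, d=0.0)
```

The reduced mb and the small s relative to intact rock (mb = mi and s = 1 at GSI = 100) quantify exactly the degradation that makes intact-rock laboratory data unrepresentative as direct model input.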
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
...accurately estimated, such as solubility, while others, such as degradation rates, are often far more uncertain. Prior to using improved methods for... ...meet this purpose, a previous application of TREECS™ was used to evaluate parameter sensitivity and the effects of highly uncertain inputs for... ...than others. One of the most uncertain inputs in this application is the loading rate (grams/year) of unexploded RDX residue. A value of 1.5 kg/yr was...
Energy Technology Data Exchange (ETDEWEB)
Ajami, N K; Duan, Q; Sorooshian, S
2006-05-05
This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Markov chain Monte Carlo (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, is used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty will lead to unrealistic model simulations and associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
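The BMA combination step can be caricatured as a weighted average of member predictions, with weights reflecting each model's skill. The sketch below uses invented streamflow predictions and inverse-MSE weights as a crude stand-in for the likelihood-based weights that BMA would estimate via the EM algorithm:

```python
import statistics

# Streamflow predictions (m^3/s) from three hypothetical rainfall-runoff models
# for the same four days, plus the observations used to weight them.
preds = {
    "model_a": [10.2, 12.1, 9.8, 11.5],
    "model_b": [9.5, 11.0, 10.5, 12.0],
    "model_c": [11.0, 13.0, 9.0, 10.8],
}
obs = [10.0, 12.0, 10.0, 11.5]

def mse(p, o):
    """Mean squared error between a prediction series and observations."""
    return statistics.fmean((a - b) ** 2 for a, b in zip(p, o))

# Inverse-MSE weights, normalized to sum to one.
inv = {k: 1.0 / mse(p, obs) for k, p in preds.items()}
total = sum(inv.values())
weights = {k: v / total for k, v in inv.items()}

# BMA-style predictive mean: weighted average of the member predictions.
combined = [sum(weights[k] * preds[k][i] for k in preds) for i in range(len(obs))]
```

By convexity, the combined series can never have a larger MSE than the worst member, and it typically beats the best one; full BMA additionally carries each member's predictive variance, yielding uncertainty bounds rather than a point forecast.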
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
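A toy example makes the unidentifiability concrete: when two parameters enter the output only as a product, distinct parameter pairs produce identical input-output data, and only the product is an identifiable combination. The model below is invented for illustration, far simpler than the systems the Gröbner-basis algorithm targets:

```python
import math

def output(a, b, k, t):
    """Toy one-compartment response y(t) = (a*b) * exp(-k*t): the parameters
    a and b enter only through their product, so individually they are
    unidentifiable while the combination p = a*b (and k) is identifiable."""
    return a * b * math.exp(-k * t)

times = [0.0, 0.5, 1.0, 2.0]
y1 = [output(2.0, 3.0, 0.7, t) for t in times]   # (a, b) = (2, 3)
y2 = [output(1.5, 4.0, 0.7, t) for t in times]   # (a, b) = (1.5, 4)
# Identical data from different parameter values: no experiment on y can
# separate a from b. Reparameterizing the model in terms of p = a*b removes
# the problem, which is exactly what identifiable combinations provide.
```

The paper's contribution is finding such combinations systematically for nonlinear differential equation models, where they are rarely visible by inspection.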
Better temperature predictions in geothermal modelling by improved quality of input parameters
DEFF Research Database (Denmark)
Fuchs, Sven; Bording, Thue Sylvester; Balling, N.
2015-01-01
Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties ... region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific, well-log-derived rock thermal properties and (ii) the consideration of laterally varying input data (reflecting changes of thermofacies in the project area) significantly improves ...
Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?
Directory of Open Access Journals (Sweden)
Anna Carter
2016-01-01
The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island, generated at very high spatial resolution (<1 m²) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3 °C of observed values) required the use of daily weather data as well as high-resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high-resolution terrain data, e.g. via LiDAR or photogrammetry, and local weather information rather than on in situ sampling of microclimate characteristics.
Input Parameters for Models of Energetic Electrons Fluxes at the Geostationary Orbit
Institute of Scientific and Technical Information of China (English)
V. I. Degtyarev; G.V. Popov; B. S. Xue; S.E. Chudnenko
2005-01-01
This paper presents the results of a cross-correlation analysis between electron fluxes (with energies of >0.6 MeV, >2.0 MeV and >4.0 MeV), geomagnetic indices and solar wind parameters. The electron fluxes are found to be controlled not only by the geomagnetic indices but also by the solar wind parameters, with the solar wind velocity showing the strongest relation to the electron fluxes. The numerical value of the relation efficiency between the external parameters and the highly energetic electron fluxes shows a periodicity. Preliminary results of a neural-network-based forecast of daily averaged electron fluxes one day ahead are also presented.
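A minimal sketch of the lagged cross-correlation analysis, on synthetic series rather than real flux and solar wind data: the flux series is constructed to lag the solar wind speed by two days, and the analysis recovers that lag.

```python
import numpy as np

rng = np.random.default_rng(0)
vsw = rng.standard_normal(500)     # toy daily solar-wind speed series
# Electron flux responding to the solar wind with a two-day delay plus noise:
flux = np.roll(vsw, 2) + 0.1 * rng.standard_normal(500)

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag), for lag >= 0."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return np.corrcoef(x, y)[0, 1]

# The lag of maximum correlation estimates the flux response delay:
best_lag = max(range(0, 8), key=lambda lag: lagged_corr(vsw, flux, lag))
```

On real data the same scan is run per energy channel against each index (Kp, Dst, solar wind velocity) to rank their predictive value.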
Saramago, Pedro; Manca, Andrea; Sutton, Alex J
2012-01-01
The evidence base informing economic evaluation models is rarely derived from a single source. Researchers are typically expected to identify and combine available data to inform the estimation of model parameters for a particular decision problem. The absence of clear guidelines on what data can be used, and on how to effectively synthesize this evidence base under different scenarios, inevitably leads to different approaches being used by different modelers. The aim of this article is to produce a taxonomy that can help modelers identify the most appropriate methods to use when synthesizing the available data for a given model parameter. The taxonomy was developed from possible scenarios faced by the analyst when dealing with the available evidence. While mainly focusing on clinical effectiveness parameters, the article also discusses strategies relevant to other key input parameters in any economic model (i.e., disease natural history, resource use/costs, and preferences). The taxonomy categorizes the evidence base for health economic modeling according to whether 1) single or multiple data sources are available, 2) individual or aggregate data are available (or both), or 3) individual or multiple decision model parameters are to be estimated from the data. References to the key methodological developments for each entry in the taxonomy, together with citations to where such methods have been used in practice, are provided throughout. The taxonomy is intended to improve the quality of the evidence synthesis informing decision models by bringing recent methodological developments in this field to the attention of health economics modelers. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Mairani, A.; Magro, G.; Tessonnier, T.; Böhlen, T. T.; Molinelli, S.; Ferrari, A.; Parodi, K.; Debus, J.; Haberer, T.
2017-06-01
Models able to predict relative biological effectiveness (RBE) values are necessary for an accurate determination of the biological effect with proton and 4He ion beams. This is particularly important when including RBE calculations in treatment planning studies comparing biologically optimized proton and 4He ion beam plans. In this work, we have tailored the predictions of the modified microdosimetric kinetic model (MKM), which is clinically applied for carbon ion beam therapy in Japan, to reproduce RBE with proton and 4He ion beams. We have tuned the input parameters of the MKM, i.e. the domain and nucleus radii, reproducing an experimental database of initial RBE data for proton and He ion beams. The modified MKM, with the best fit parameters obtained, has been used to reproduce in vitro cell survival data in clinically-relevant scenarios. A satisfactory agreement has been found for the studied cell lines, A549 and RENCA, with the mean absolute survival variation between the data and predictions within 2% and 5% for proton and 4He ion beams, respectively. Moreover, a sensitivity study has been performed varying the domain and nucleus radii and the quadratic parameter of the photon response curve. The promising agreement found in this work for the studied clinical-like scenarios supports the usage of the modified MKM for treatment planning studies in proton and 4He ion beam therapy.
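The RBE bookkeeping that such model tuning rests on can be illustrated with the linear-quadratic (LQ) survival curve: the sketch below computes RBE at the 10% survival level from LQ parameters. The parameter values are purely illustrative, not the MKM fits from the paper.

```python
import math

def lq_dose(alpha, beta, survival):
    """Dose at which S = exp(-(alpha*D + beta*D^2)) reaches the target survival."""
    c = -math.log(survival)
    return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * c)) / (2.0 * beta)

def rbe(alpha_ion, beta_ion, alpha_ph, beta_ph, survival=0.10):
    """RBE at a survival level: photon dose over ion dose for equal effect."""
    return lq_dose(alpha_ph, beta_ph, survival) / lq_dose(alpha_ion, beta_ion, survival)

# Purely illustrative LQ parameters (a higher ion alpha, as is typical):
rbe10 = rbe(alpha_ion=0.35, beta_ion=0.05, alpha_ph=0.15, beta_ph=0.05)
```

The MKM's role is to predict the ion alpha (via the tuned domain and nucleus radii) as a function of particle type and energy; the RBE then follows from the LQ algebra above.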
The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!
Thomas, M. E.; Neuberg, J. W.
2012-04-01
Many conduit flow models now exist, and some of these models are becoming extremely complicated: conducted in three dimensions and incorporating the physics of compressible three-phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is coherent, for instance, to propose the involvement of sub-surface-dwelling magma trolls as an explanation for the change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions unless they were absolutely necessary. While the understanding of individual, often small-scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic of governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. Several viscosity models exist: how do they differ? Can models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well are these variables constrained? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exert a strong controlling influence on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models, a 20 K decrease in the temperature of the melt results in a greater than 100% increase in the melt viscosity. With changes of
D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola
2017-04-01
The Aosta Valley region is located in the north-west Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (elevations ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained were studied in order to assess the relationships among the different parameters and the bedrock lithology. The analyzed soils at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), and the median k_s value (10⁻⁶ m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input parameter maps for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
Distribution Development for STORM Ingestion Input Parameters
Energy Technology Data Exchange (ETDEWEB)
Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-07-01
The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. The Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. The Average Crop Yield changed from a constant value of 3.783 kg edible/m² to a normal distribution with a mean of 3.23 kg edible/m² and a standard deviation of 0.442 kg edible/m². The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37×10⁻⁴ (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean of 3.38×10⁻⁴ (Bq crop/kg)/(Bq soil/kg) and a standard deviation of 3.33.
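The conversions above are straightforward to reproduce with NumPy. Note that NumPy's lognormal takes the mean and sigma of ln(X), and the report's "standard deviation of 3.33" is interpreted here as a geometric standard deviation; that interpretation is an assumption, not a statement from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Normal distributions quoted in the report (mean, standard deviation):
consumption = rng.normal(102.96, 2.65, n)        # kg/yr
crop_yield = rng.normal(3.23, 0.442, n)          # kg edible/m^2

# Lognormal crop uptake factor: the geometric mean 3.38e-4 maps to
# mu = ln(3.38e-4), and a geometric standard deviation of 3.33 maps to
# sigma = ln(3.33) (interpretation assumed, see lead-in).
uptake = rng.lognormal(mean=np.log(3.38e-4), sigma=np.log(3.33), size=n)

geo_mean = np.exp(np.log(uptake).mean())         # should recover ~3.38e-4
```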
DEFF Research Database (Denmark)
2014-01-01
The present invention proposes methods, devices and computer program products. To this extent, there is defined a set X including N distinct parameter values x_i for at least one input parameter x, N being an integer greater than or equal to 1, first measured the physical quantity Pm1 for each...... based on the Vandermonde matrix and the first measured physical quantity, according to the equation W = (VM^T · VM)^(-1) · VM^T · Pm1. The model is iteratively refined so as to obtain a desired emulation precision. The model can later be used to emulate the physical quantity based on input parameters or logs taken...
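The quoted least-squares relation can be demonstrated in a few lines; the grid, polynomial powers, and "measured" values below are illustrative, not from the patent.

```python
import numpy as np

# Calibration grid for one input parameter x and a "measured" quantity Pm1:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pm1 = 2.0 + 0.5 * x - 0.1 * x**2          # pretend these were measured

# Vandermonde matrix VM (one column per polynomial power) and the
# least-squares weights W = (VM^T VM)^(-1) VM^T Pm1 from the claim:
vm = np.vander(x, N=3, increasing=True)   # columns: 1, x, x^2
w = np.linalg.inv(vm.T @ vm) @ vm.T @ pm1

emulated = vm @ w                          # model now emulates Pm1 on the grid
```

In numerical practice `np.linalg.lstsq(vm, pm1)` is preferred over forming the explicit inverse, which is ill-conditioned for high polynomial orders; the explicit form is shown only to mirror the claim's equation.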
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2015-01-01
) different levels of network congestion. The choice of the probability distributions shows a low impact on the model output uncertainty, quantified in terms of the coefficient of variation. Instead, with respect to the choice of different assignment algorithms, the link flow uncertainty, expressed in terms...... of the coefficient of variation, resulting from stochastic user equilibrium and user equilibrium is 0.425 and 0.468, respectively. Finally, network congestion does not show a strong effect on model output uncertainty at the network level. However, the final uncertainty of links with higher volume/capacity ratio...
Energy Technology Data Exchange (ETDEWEB)
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of the distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions for the simultaneous estimation of the input and the parameters are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.
Cui, Yunfeng; Bai, Jing
2005-01-01
Liver kinetic study of [18F]2-fluoro-2-deoxy-D-glucose (FDG) metabolism in the human body is an important tool for functional modeling and glucose metabolic rate estimation. In general, the arterial blood time-activity curve (TAC) and the tissue TAC are required as the input and output functions for the kinetic model. For liver studies, however, the arterial input may not be consistent with the actual model input, because the liver has a dual blood supply from the hepatic artery (HA) and the portal vein (PV). In this study, the result of model parameter estimation using a dual-input function is compared with that using an arterial-input function. First, a dynamic positron emission tomography (PET) experiment is performed after injection of FDG into the human body. The TACs of aortic blood, PV blood, and five regions of interest (ROIs) in the liver are obtained from the PET image. Then, the dual-input curve is generated by calculating a weighted sum of the arterial and PV input curves. Finally, the kinetic parameters of the five liver ROIs are estimated with the arterial-input and dual-input functions, respectively. The results indicate that the two methods provide different parameter estimates and that the dual-input function may lead to more accurate parameter estimation.
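The dual-input construction is a plain weighted sum of the two supply curves. The sketch below uses synthetic TACs and an illustrative hepatic arterial fraction; none of the values come from the study.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 121)                   # minutes after injection
c_ha = 50.0 * np.exp(-0.10 * t)                   # toy hepatic-artery TAC
c_pv = 40.0 * np.exp(-0.05 * t)                   # toy portal-vein TAC

# Dual-input function as a weighted sum of the two supplies; the hepatic
# arterial fraction f is illustrative, not a value from the paper.
f = 0.25
c_dual = f * c_ha + (1.0 - f) * c_pv
```

The dual-input curve then replaces the arterial TAC as the input function when fitting the compartmental model's rate constants.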
Control rod drive WWER 1000 – tuning of input parameters
Directory of Open Access Journals (Sweden)
Markov P.
2007-10-01
The article picks up on the contributions presented at the conferences Computational Mechanics 2005 and 2006, in which a calculational model of an upgraded control rod linear stepping drive for the WWER 1000 reactors (LKP-M/3) was described and results of the analysis of the dynamic response of its individual parts when moving up- and downwards were included. The contribution deals with the tuning of the input parameters of the 3rd-generation drive, with the objective of making its running as smooth as possible, so as to minimize wear of its parts and hence achieve maximum lifetime.
Beitlerová, Hana; Hieke, Falk; Žížala, Daniel; Kapička, Jiří; Keiser, Andreas; Schmidt, Jürgen; Schindewolf, Marcus
2017-04-01
Process-based erosion modelling is a developing and adequate tool to assess, simulate and understand the complex mechanisms of soil loss due to surface runoff. While currently available models include powerful approaches, a major drawback is their complex parametrization. Soil texture is a major input parameter for the physically based soil loss and deposition model EROSION 3D. However, as the model was developed in Germany, it depends on the German soil classification. To exploit data generated during a massive nationwide soil survey campaign conducted in the 1960s across the entire Czech Republic, a transfer from the Czech to the German, or at least an international (e.g. WRB), system is mandatory. During the survey, grain sizes were differentiated using a two-fraction approach, separating texture solely into particles above and below 0.01 mm rather than into clayey, silty and sandy textures. Consequently, the Czech system applies a classification of seven different textures based on the respective percentages of large and small particles, whereas in Germany 31 groups are essential. The approach followed for matching Czech soil survey data to the German system focuses on semi-logarithmic interpolation of the cumulative soil texture curve, supplemented by a regression equation based on a recent database of 128 soil pits. Furthermore, for each of the seven Czech texture classes a group of typically suitable classes of the German system was derived. A GIS-based spatial analysis to test the interpolation approaches was carried out. First results show promising matches and pave the way to a Czech model application of EROSION 3D.
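The semi-logarithmic interpolation of the cumulative texture curve can be sketched as follows. The anchor diameters and percentages are invented survey values, and the 0.002/0.063 mm boundaries stand in for the German classification limits; none of these numbers are from the study.

```python
import numpy as np

# Known points on the cumulative curve: percent finer at 0.01 mm (the Czech
# two-fraction boundary) plus assumed anchors at the measurement limits.
diam_mm = np.array([0.001, 0.01, 2.0])     # particle diameters (toy record)
cum_pct = np.array([10.0, 35.0, 100.0])    # percent finer than each diameter

def percent_finer(d_mm):
    """Semi-logarithmic interpolation of the cumulative texture curve:
    linear in log10(diameter), as grain-size curves are usually plotted."""
    return float(np.interp(np.log10(d_mm), np.log10(diam_mm), cum_pct))

# German-style clay/silt/sand boundaries at 0.002 and 0.063 mm:
clay = percent_finer(0.002)
silt = percent_finer(0.063) - clay
```

With clay, silt and sand percentages recovered this way, each sample can be assigned to one of the 31 German texture groups EROSION 3D expects.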
Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles
Morelli, Eugene A.; Klein, Vladislav
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
Improved insensitive to input parameters trajectory clustering algorithm
Institute of Scientific and Technical Information of China (English)
Jiashun Chen; Dechang Pi
2013-01-01
The existing trajectory clustering algorithm (TRACLUS) is sensitive to the input parameters ε and MinLns: even a small change in parameter values can yield entirely different clustering results. To address this vulnerability, a shielding-parameter-sensitivity trajectory clustering (SPSTC) algorithm, insensitive to the input parameters, is proposed. Firstly, definitions of the core distance and reachable distance of a line segment are presented, and the algorithm generates a cluster ordering according to these distances. Secondly, reachability plots of the line-segment sets are constructed according to the cluster ordering and reachable distance. Thirdly, a parameterized sequence, representing the internal cluster structure of the trajectory data, is extracted from the reachability plot, and the final trajectory clusters are obtained from it. Experiments on real and test data sets show that the SPSTC algorithm effectively reduces sensitivity to the input parameters while obtaining better-quality trajectory clusters.
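The core-distance/reachability-ordering machinery that SPSTC builds on is OPTICS-like. The sketch below applies it to plain points rather than line segments (a segment distance would replace the Euclidean one), so it is an analogy to the described steps, not the SPSTC algorithm itself.

```python
import numpy as np

def reachability_order(pts, min_pts=3):
    """OPTICS-style cluster ordering by core/reachability distance."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    core = np.sort(d, axis=1)[:, min_pts]    # distance to the min_pts-th neighbour
    reach = np.full(n, np.inf)
    seen = np.zeros(n, dtype=bool)
    order = [0]
    seen[0] = True
    for _ in range(n - 1):
        i = order[-1]
        cand = ~seen
        # Reachability of q from i: max(core distance of i, d(i, q)).
        reach[cand] = np.minimum(reach[cand], np.maximum(core[i], d[i, cand]))
        j = int(np.argmin(np.where(seen, np.inf, reach)))
        order.append(j)
        seen[j] = True
    return order, reach

# Two well-separated toy groups; valleys in the reachability values along
# the ordering reveal the clusters without a global epsilon.
pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
                [50, 50], [50, 51], [51, 50], [51, 51], [50.5, 50.5]], float)
order, reach = reachability_order(pts, min_pts=3)
```

Because the clusters are read off the reachability plot rather than from a fixed ε, the result is far less sensitive to parameter choice, which is the property SPSTC carries over to line segments.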
Model based optimization of EMC input filters
Energy Technology Data Exchange (ETDEWEB)
Raggl, K; Kolar, J. W. [Swiss Federal Institute of Technology, Power Electronic Systems Laboratory, Zuerich (Switzerland); Nussbaumer, T. [Levitronix GmbH, Zuerich (Switzerland)
2008-07-01
Input filters of power converters for compliance with regulatory electromagnetic compatibility (EMC) standards are often over-dimensioned in practice, due to a non-optimal selection of the number of filter stages and/or the lack of solid volumetric models of the inductor cores. This paper presents a systematic filter design approach based on a specific filter attenuation requirement and volumetric component parameters. It is shown that a minimal volume can be found for a certain optimal number of filter stages for both the differential mode (DM) and common mode (CM) filter. The considerations are carried out exemplarily for an EMC input filter of a single-phase power converter at power levels of 100 W, 300 W, and 500 W. (author)
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained with robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Visualization of Input Parameters for Stream and Pathline Seeding
Directory of Open Access Journals (Sweden)
Tony McLoughlin
2015-04-01
Uncertainty arises in all stages of the visualization pipeline. However, the majority of flow visualization applications convey no uncertainty information to the user. In tools where uncertainty is conveyed, the focus is generally on the data, such as error that stems from the numerical methods used to generate a simulation, or on uncertainty associated with mapping visualization primitives to data. Our work is aimed at another source of uncertainty: that associated with user-controlled input parameters. The navigation and stability analysis of user parameters has received increasing attention recently. This work presents an investigation of this topic for flow visualization, specifically for three-dimensional streamline and pathline seeding. From a dynamical systems point of view, seeding can be formulated as a predictability problem based on an initial condition. Small perturbations in the initial value may result in large changes in the streamline in regions of high unpredictability. Analyzing this predictability quantifies the perturbation a trajectory is subjected to by the flow. In other words, some predictions are less certain than others as a function of initial conditions. We introduce novel techniques to visualize important user input parameters such as streamline and pathline seeding position in both space and time, seeding rake position and orientation, and inter-seed spacing. The implementation is based on a metric which quantifies similarity between stream- and pathlines. This is important for Computational Fluid Dynamics (CFD) engineers as, even with the variety of seeding strategies available, manual seeding using a rake is ubiquitous. We present methods to quantify and visualize the effects that changes in user-controlled input parameters have on the resulting stream- and pathlines. We also present various visualizations to help CFD scientists intuitively and effectively navigate this parameter space. The reaction from a domain
Soft Sensor for Inputs and Parameters Using Nonlinear Singular State Observer in Chemical Processes
Institute of Scientific and Technical Information of China (English)
许锋; 汪晔晔; 罗雄麟
2013-01-01
Chemical processes are usually nonlinear singular systems. In this study, a soft sensor using a nonlinear singular state observer is established for unknown inputs and uncertain model parameters in chemical processes, which are augmented as state variables. Based on the observability of the singular system, this paper presents a simplified observability criterion under certain conditions for unknown inputs and uncertain model parameters. When observability is satisfied, the unknown inputs and the uncertain model parameters are estimated online by the soft sensor using an augmented nonlinear singular state observer. The riser reactor of a fluid catalytic cracking unit is used as an example for analysis and simulation. With the catalyst circulation rate as the only unknown input and no model error, one temperature sensor at the riser reactor outlet ensures correct estimation of the catalyst circulation rate. However, when uncertain model parameters also exist, additional temperature sensors must be used to ensure correct estimation of the unknown inputs and uncertain model parameters of chemical processes.
Datta, S.; Jones, W. L.; Ebrahimi, H.; Chen, R.; Payne, V.; Kroodsma, R.
2014-12-01
The first step in radiometric inter-calibration is to ascertain the self-consistency and reasonableness of the observed brightness temperature (Tb) for each individual sensor involved. One of the widely used approaches is to compare the observed Tb with a Tb simulated using a forward radiative transfer model (RTM) and input geophysical parameters at the geographic location and time of the observation. In this study we test the sensitivity of the RTM to uncertainties in the input geophysical parameters, as well as to the underlying physical assumptions about gaseous absorption and surface emission in the RTM. SAPHIR, a cross-track scanner onboard the Indo-French Megha-Tropiques satellite, gives us a unique opportunity to study six dual-band 183 GHz channels in an inclined orbit over the Tropics for the first time. We will also perform the same sensitivity analysis using the Advanced Technology Microwave Sounder (ATMS) 23 GHz and five 183 GHz channels. Preliminary analysis comparing GDAS and an independently retrieved profile shows some sensitivity of the RTM to the input data. An extended analysis of this work using different input geophysical parameters will be presented. Two different absorption models, the Rosenkranz model and MonoRTM, will be tested to analyze the sensitivity of the RTM to the spectroscopic assumptions in each model. Also, for the 23.8 GHz channel, the sensitivity of the RTM to the surface emissivity model will be checked. Finally, the impact of these sensitivities on radiometric inter-calibration of radiometers at sounding frequencies will be assessed.
Klimenko, Maxim; Klimenko, Vladimir; Ratovsky, Konstantin; Goncharenko, Larisa
Earlier, Klimenko et al. (2009), in calculating the ionospheric effects of the storm sequence of September 9-14, 2005, set the model input parameters (potential difference across the polar caps, field-aligned currents of the second region, and particle precipitation fluxes and energies) as functions of the Kp index of geomagnetic activity. Analysis of the results showed that the quantitative discrepancies between calculations and observations could stem from: the use of the 3-hour Kp index in setting the time dependence of the model input parameters; the dipole approximation of the geomagnetic field; and the absence from the model calculations of the effects of the solar flares that occurred during the period considered. In the present study the model input parameters were set as functions of the AE and Kp indices of geomagnetic activity according to different empirical models and morphological representations (Feshchenko and Maltsev, 2003; Cheng et al., 2008; Zhang and Paxton, 2008). We took into account the shift of the field-aligned currents of the second region to lower latitudes, as in Sojka et al. (1994), and a 30-min time delay of the variations of these currents relative to the variations of the potential difference across the polar caps at the storm sudden commencement phase. We also took into account the ionospheric effects of solar flares. The calculation of the ionospheric effects of the storm sequence was carried out using the Global Self-Consistent Model of the Thermosphere, Ionosphere and Protonosphere (GSM TIP) developed in WD IZMIRAN (Namgaladze et al., 1988). The calculation results were compared with experimental data. This study is supported by RFBR grant 08-05-00274. References: Cheng Z.W., Shi J.K., Zhang T.L., Dunlop M. and Liu Z.X. Relationship between FAC at plasma sheet boundary layers and AE index during storms from August to October, 2001. Sci. China Ser. E-Tech. Sci., 2008, Vol. 51, No. 7, 842
A Distortion Input Parameter in Image Denoising Algorithms with Wavelets
Directory of Open Access Journals (Sweden)
Anisia GOGU
2009-07-01
Full Text Available The problem of image denoising based on wavelets is considered. The paper proposes an image denoising method that imposes a distortion input parameter instead of a threshold. The method has two algorithms. The first runs off-line: applied to a prototype of the image class, it builds a specific dependency, linear or nonlinear, between the final desired distortion and the required probability of the detail coefficients. The second algorithm directly applies the denoising with a threshold computed from the previous step. The threshold is estimated from the probability density function of the detail coefficients by imposing the probability of the coefficients that will be kept. The results obtained are at the same quality level as those of other well-known methods.
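The threshold-selection rule described above (derive the threshold from the empirical distribution of the detail coefficients by fixing the fraction of coefficients to keep, rather than fixing the threshold itself) can be sketched in a few lines. This is an illustrative reconstruction on synthetic coefficients, not the author's code:

```python
import numpy as np

def threshold_from_keep_probability(detail_coeffs, keep_prob):
    """Pick a magnitude threshold so that roughly `keep_prob` of the
    detail coefficients (by count) survive hard thresholding."""
    mags = np.abs(np.asarray(detail_coeffs))
    # Keep the largest `keep_prob` fraction: threshold at the
    # (1 - keep_prob) quantile of the magnitudes.
    return np.quantile(mags, 1.0 - keep_prob)

def hard_threshold(coeffs, t):
    coeffs = np.asarray(coeffs)
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

rng = np.random.default_rng(0)
# Synthetic detail coefficients: a few large values plus many small ones.
d = np.concatenate([rng.normal(0, 0.1, 950), rng.normal(0, 5.0, 50)])
t = threshold_from_keep_probability(d, keep_prob=0.05)
kept_fraction = np.count_nonzero(hard_threshold(d, t)) / d.size
```

By construction `kept_fraction` comes out close to the requested 0.05; mapping the desired distortion to this probability is the off-line step of the paper's first algorithm.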
A new algorithm for importance analysis of the inputs with distribution parameter uncertainty
Li, Luyi; Lu, Zhenzhou
2016-10-01
Importance analysis is aimed at finding the contributions by the inputs to the uncertainty in a model output. For structural systems involving inputs with distribution parameter uncertainty, the contributions by the inputs to the output uncertainty are governed by both the variability and parameter uncertainty in their probability distributions. A natural and consistent way to arrive at importance analysis results in such cases would be a three-loop nested Monte Carlo (MC) sampling strategy, in which the parameters are sampled in the outer loop and the inputs are sampled in the inner nested double loop. However, the computational effort of this procedure is often prohibitive for engineering problems. This paper therefore proposes a new, efficient algorithm for importance analysis of the inputs in the presence of parameter uncertainty. By introducing a 'surrogate sampling probability density function (SS-PDF)' and incorporating single-loop MC theory into the computation, the proposed algorithm can reduce the original three-loop nested MC computation to a single-loop one in terms of model evaluations, which requires substantially less computational effort. Methods for choosing a proper SS-PDF are also discussed in the paper. The efficiency and robustness of the proposed algorithm have been demonstrated by results of several examples.
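The single-loop idea behind a surrogate sampling density can be illustrated on a toy problem: draw the input once from a wide surrogate PDF, then reweight by likelihood ratios to evaluate any value of the distribution parameter without re-sampling. Everything below (the model Y = X², the particular surrogate) is a hypothetical sketch, not the paper's full algorithm:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Model: Y = X**2 with X ~ N(mu, 1), where the distribution parameter mu
# is itself uncertain.  Instead of re-sampling X for every mu (nested MC),
# draw ONE sample from a wide surrogate density and reweight.
rng = np.random.default_rng(1)
mu_s, sigma_s = 0.0, 2.0                      # surrogate (SS-PDF): wide enough
x = rng.normal(mu_s, sigma_s, 200_000)        # single loop of model inputs
y = x ** 2
q = normal_pdf(x, mu_s, sigma_s)              # surrogate density values

def mean_Y_given_mu(mu):
    """Self-normalized importance-sampling estimate of E[Y | mu]."""
    w = normal_pdf(x, mu, 1.0) / q            # likelihood ratio
    return np.sum(w * y) / np.sum(w)

m0 = mean_Y_given_mu(0.0)    # analytically E[X^2] = mu^2 + 1 = 1
m15 = mean_Y_given_mu(1.5)   # analytically 1.5^2 + 1 = 3.25
```

One set of model evaluations `y` thus serves every parameter value, which is exactly the saving the single-loop strategy exploits.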
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Treatments of Precipitation Inputs to Hydrologic Models
Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization, the most sensitive input being the rainfall time series. These records can come from land-based ...
Energy Technology Data Exchange (ETDEWEB)
Sprung, J.L.; Jow, H-N (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Helton, J.C. (Arizona State Univ., Tempe, AZ (USA))
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Approximate input physics for stellar modelling
Pols, O R; Eggleton, P P; Han, Z; Pols, O R; Tout, C A; Eggleton, P P; Han, Z
1995-01-01
We present a simple and efficient, yet reasonably accurate, equation of state, which at the moderately low temperatures and high densities found in the interiors of stars less massive than the Sun is substantially more accurate than its predecessor by Eggleton, Faulkner & Flannery. Along with the most recently available values in tabular form of opacities, neutrino loss rates, and nuclear reaction rates for a selection of the most important reactions, this provides a convenient package of input physics for stellar modelling. We briefly discuss a few results obtained with the updated stellar evolution code.
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Liingaard, Morten
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. The lumped-parameter model development has been reported by (Wolf 1991b; Wolf 1991a; Wolf and Paronesso 1991; Wolf and Paronesso 19...
Antenna Correlation From Input Parameters for Arbitrary Topologies and Terminations
DEFF Research Database (Denmark)
Alrabadi, Osama; Andersen, Jørgen Bach; Pedersen, Gert Frølund
2012-01-01
The spatial correlation between pairs of antennas in a system composed of N RF ports is found by extending the N × N scattering matrix to an (N + 1)×(N + 1) spatial scattering matrix, where the extra space dimension accounts for the reference port patterns. The lossless property of the spatial scattering matrix in a 3D uniform field is employed for expressing the spatial correlation between the port patterns at arbitrary complex terminations, merely from the reference scattering parameters and the complex terminations, without any far-field calculation.
Input Parameters Optimization in Swarm DS-CDMA Multiuser Detectors
Abrão, Taufik; Angelico, Bruno A; Jeszensky, Paul Jean E
2010-01-01
In this paper, the uplink direct-sequence code division multiple access (DS-CDMA) multiuser detection (MuD) problem is studied from a heuristic perspective, using particle swarm optimization (PSO). Considering different system improvements for future technologies, such as high-order modulation and diversity exploitation, a complete parameter optimization procedure for the PSO applied to the MuD problem is provided, which represents the major contribution of this paper. Furthermore, the performance of the PSO-MuD is briefly analyzed via Monte Carlo simulations. Simulation results show that, after convergence, the performance reached by the PSO-MuD is much better than that of the conventional detector, and somewhat close to the single-user bound (SuB). A Rayleigh flat channel is initially considered, but the results are further extended to diversity (time and spatial) channels.
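As a minimal illustration of the optimizer being tuned, a generic global-best PSO (not the DS-CDMA detector itself) might look as follows; the inertia weight w and acceleration coefficients c1, c2 are exactly the kind of input parameters whose optimization the paper addresses, and the values used here are common textbook defaults, not the paper's optimized settings:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(f(g))

# Sanity check on a 4-dimensional sphere function (minimum 0 at the origin).
best, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=4)
```

In the MuD setting, `f` would be the detector's log-likelihood cost over candidate bit vectors rather than this toy sphere function.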
Influence of the input parameters on the efficiency of plaster sanding with alundum abrasive discs
Krajcarz, D.; Spadło, S.; Młynarczyk, P.
2017-02-01
The paper presents test results concerning the relationship between selected input parameters and the process efficiency for the sanding of plaster surfaces with alundum abrasive discs. The input parameters under study were the size of the abrasive grains, the force exerted by the plaster sample pressing against the abrasive disc and the no-load rotational speed of the abrasive disc. The experimental data illustrating the relationship between the process efficiency and the particular input parameters were used to select the optimum plaster sanding conditions.
Input modeling with phase-type distributions and Markov models theory and applications
Buchholz, Peter; Felko, Iryna
2014-01-01
Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
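The simplest instance of the parameter-fitting step surveyed in the book is two-moment matching of an Erlang distribution, a special case of a phase-type distribution. The sketch below is only illustrative and far less general than the PH and MAP fitting algorithms the book covers:

```python
import numpy as np

def fit_erlang_by_moments(samples):
    """Moment-match an Erlang(k, lam) distribution to data.

    An Erlang(k) distribution has squared coefficient of variation 1/k,
    so k comes from the sample cv^2 and the rate from the sample mean.
    """
    m = np.mean(samples)
    cv2 = np.var(samples) / m ** 2        # squared coefficient of variation
    k = max(1, round(1.0 / cv2))          # number of exponential phases
    lam = k / m                           # mean of Erlang(k, lam) is k/lam
    return k, lam

rng = np.random.default_rng(2)
# Synthetic inter-arrival times from an Erlang(3) with rate 2.
data = rng.gamma(shape=3, scale=1 / 2.0, size=50_000)
k, lam = fit_erlang_by_moments(data)      # should recover k = 3, lam near 2
```

Real traces with cv² > 1 or autocorrelation require hyperexponential or MAP models, which is where the algorithms compared in the book come in.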
Zhang, Xuesong
2011-11-01
Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework (BNN-PIS) to incorporate the uncertainties associated with parameters, inputs, and structures into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structures. Critical evaluation of the posterior distribution of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of and interactions among different uncertainty sources is expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting. © 2011 Elsevier B.V.
Effects of input uncertainty on cross-scale crop modeling
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
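The error-propagation argument attributed to Fodor & Kovacs (2005) can be reproduced in spirit with a small Monte Carlo sketch. The yield function below is a made-up stand-in (the actual APSIM/LPJmL response surfaces are far richer), so the resulting percentage is only illustrative:

```python
import numpy as np

def toy_yield(temp_c, radiation, precip):
    """Hypothetical stand-in for a crop model's yield response."""
    return 8.0 - 0.15 * (temp_c - 26.0) ** 2 + 0.01 * radiation + 0.002 * precip

rng = np.random.default_rng(3)
n = 100_000
# Measurement errors quoted above: +-0.2 degC absolute, +-2 % radiation,
# +-3 % precipitation (treated here as 1-sigma Gaussian errors for simplicity).
temp = 27.0 + rng.normal(0.0, 0.2, n)
rad = 180.0 * (1.0 + rng.normal(0.0, 0.02, n))
prec = 600.0 * (1.0 + rng.normal(0.0, 0.03, n))
y = toy_yield(temp, rad, prec)
cv = np.std(y) / np.mean(y)     # relative yield uncertainty from input errors
```

The magnitude of `cv` depends entirely on the sensitivity of the yield function to each input, which is precisely why the study compares models of differing complexity and input resolution.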
Response model parameter linking
Barrett, Michelle Derbenwick
2015-01-01
With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require
Input modelling for subchannel analysis of CANFLEX fuel bundle
Energy Technology Data Exchange (ETDEWEB)
Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-06-01
This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of CANDU fuel channels. The CASS code can give different calculation results depending on the user's input modelling. Hence, the objective of this report is to provide the background information for the input modelling, establish the accuracy of the input data, and give confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.
Sensitivity analysis of a sound absorption model with correlated inputs
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
Wind Farm Decentralized Dynamic Modeling With Parameters
DEFF Research Database (Denmark)
Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran;
2010-01-01
Development of dynamic wind flow models for wind farms is part of the research in the European FP7 project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from...
REFLECTIONS ON THE INOPERABILITY INPUT-OUTPUT MODEL
Dietzenbacher, Erik; Miller, Ronald E.
2015-01-01
We argue that the inoperability input-output model is a straightforward - albeit potentially very relevant - application of the standard input-output model. In addition, we propose two less standard input-output approaches as alternatives to take into consideration when analyzing the effects of disa
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
Summary: The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles coincide in part with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
Distributed Parameter Modelling Applications
DEFF Research Database (Denmark)
2011-01-01
Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications, both steady state and dynamic, are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the ... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono- and diammonium phosphate) production within an industrial granulator, which involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...
Belkhatir, Zehor
2016-08-05
This paper deals with joint parameter and input estimation for a coupled PDE-ODE system. The system consists of a damped wave equation and an infinite dimensional ODE. This model describes the spatiotemporal hemodynamic response in the brain, and the objective is to characterize brain regions using functional Magnetic Resonance Imaging (fMRI) data. For this reason, we propose an adaptive estimator and prove the asymptotic convergence of the state, the unknown input and the unknown parameters. The proof is based on a Lyapunov approach combined with a priori identifiability assumptions. The performance of the proposed observer is illustrated through some simulation results.
Influence of magnetospheric inputs definition on modeling of ionospheric storms
Tashchilin, A. V.; Romanova, E. B.; Kurkin, V. I.
For numerical modeling of ionospheric storms, the parameters of the neutral atmosphere and magnetosphere are usually specified by the corresponding empirical models. The statistical nature of these models makes them impractical for simulating an individual storm, so the empirical models have to be corrected using various additional considerations. In this work we investigate the influence of magnetospheric inputs, such as the distributions of the electric potential and the number and energy fluxes of precipitating electrons, on the results of ionospheric storm simulations. To this end, for the strong geomagnetic storm of September 25, 1998, hourly global distributions of these magnetospheric inputs from September 20 to 27 were calculated by the magnetogram inversion technique (MIT). Then, with the help of a 3-D ionospheric model, two variants of the ionospheric response to this magnetic storm were simulated, using the MIT data and the empirical models of electric fields (Sojka et al., 1986) and electron precipitation (Hardy et al., 1985). A comparison of the results showed that for high-latitude and subauroral stations the daily variations of electron density calculated with the MIT data are closer to observations than those based on the empirical models. In addition, the use of the MIT data reveals some peculiarities in the daily variations of electron density during a strong geomagnetic storm. References Sojka J.J., Rasmussen C.E., Schunk R.W. J. Geophys. Res., 1986, N10, p.11281. Hardy D.A., Gussenhoven M.S., Holeman E.A. J. Geophys. Res., 1985, N5, p.4229.
Mellinger, Philippe; Döhler, Michael; Mevel, Laurent
2016-09-01
An important step in the operational modal analysis of a structure is to infer on its dynamic behavior through its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified assuming that ambient sources like wind or traffic excite the system sufficiently. When also input data is available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircraft. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, accuracy of variance estimations and sensor noise robustness are discussed. Finally, these algorithms are applied to real measured data obtained during vibration tests of an aircraft.
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
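Tukey's half-space depth, on which the robust-selection step relies, can be approximated by random projections. The sketch below is a generic Monte Carlo approximation (exact algorithms exist in low dimensions), not the study's implementation:

```python
import numpy as np

def halfspace_depth(point, cloud, n_dirs=2000, seed=0):
    """Monte Carlo approximation of Tukey's half-space depth.

    For each random direction u, count the fraction of cloud points on
    either closed side of the hyperplane through `point` orthogonal to u;
    the depth is the minimum such fraction over directions.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, cloud.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (cloud - point) @ u.T            # shape (n_points, n_dirs)
    frac = np.mean(proj >= 0.0, axis=0)     # fraction on the positive side
    return float(np.minimum(frac, 1.0 - frac).min())

rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 2))                       # e.g. parameter vectors
deep = halfspace_depth(np.zeros(2), cloud)              # central point: near 0.5
shallow = halfspace_depth(np.array([5.0, 5.0]), cloud)  # outlier: near 0
```

In the study's setting, robust parameter vectors would be those with high depth relative to the set of well-performing vectors, i.e. parameter choices that remain central however the data errors shift the optimum.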
Analytical delay models for RLC interconnects under ramp input
Institute of Scientific and Technical Information of China (English)
REN Yinglei; MAO Junfa; LI Xiaochun
2007-01-01
Analytical delay models for Resistance-Inductance-Capacitance (RLC) interconnects with ramp input are presented for different situations, including the overdamped, underdamped and critically damped response cases. The delay estimation errors of the analytical models proposed in this paper are less than 3% in comparison with the SPICE-computed delay. These models are useful for the delay analysis of actual circuits, in which the input signal is a ramp rather than an ideal step.
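For a lumped series RLC model of an interconnect, the three response regimes follow directly from the damping ratio. The snippet below assumes the standard second-order formulas (the paper's treatment of distributed interconnect is more involved), and the component values are hypothetical:

```python
import math

def rlc_damping(R, L, C):
    """Classify the step/ramp response regime of a series RLC circuit:
    zeta < 1 underdamped, zeta = 1 critically damped, zeta > 1 overdamped."""
    zeta = (R / 2.0) * math.sqrt(C / L)   # damping ratio
    wn = 1.0 / math.sqrt(L * C)           # undamped natural frequency (rad/s)
    if math.isclose(zeta, 1.0, rel_tol=1e-9):
        regime = "critical"
    elif zeta < 1.0:
        regime = "underdamped"
    else:
        regime = "overdamped"
    return zeta, wn, regime

# A hypothetical on-chip line: R = 50 ohm, L = 1 nH, C = 100 fF.
zeta, wn, regime = rlc_damping(50.0, 1e-9, 100e-15)
```

The appropriate closed-form delay expression (oscillatory versus monotone) is then selected by `regime`, which is why the paper derives a separate model per case.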
Li, W. P.; Luo, B.; Huang, H.
2016-02-01
This paper presents a vibration control strategy for a two-link Flexible Joint Manipulator (FJM) with a Hexapod Active Manipulator (HAM). A dynamic model of the multi-body, rigid-flexible system composed of an FJM, a HAM and a spacecraft was built. A hybrid controller was proposed by combining the Input Shaping (IS) technique with an Adaptive-Parameter Auto Disturbance Rejection Controller (APADRC). The controller was used to suppress the vibration caused by external disturbances and input motions. Parameters of the APADRC were adaptively adjusted to ensure that the closed-loop system behaves as a given reference system, even if the configuration of the manipulator changes significantly during motion. Because precise parameters of the flexible manipulator are not required by the IS system, the operation of the controller was sufficiently robust to accommodate uncertainties in system parameters. Simulation results verified the effectiveness of the HAM scheme and controller in the vibration suppression of the FJM during operation.
On Optimal Input Design and Model Selection for Communication Channels
Energy Technology Data Exchange (ETDEWEB)
Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.
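Why the impulse input makes FIR identification particularly simple can be seen in a small least-squares sketch: with an impulse at the start of the record, the regression matrix reduces to a shifted identity and the measured output is just the impulse response plus noise. All numbers below are hypothetical:

```python
import numpy as np

def identify_fir(u, y, order):
    """Least-squares FIR fit: y[k] = sum_j h[j] * u[k-j] + noise."""
    n = len(y)
    U = np.zeros((n, order))
    for j in range(order):
        U[j:, j] = u[: n - j]          # column j is the input delayed by j
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

h_true = np.array([1.0, 0.5, 0.25, 0.125])   # unknown FIR channel taps
u = np.zeros(200)
u[0] = 1.0                                   # impulse at the start of the record
rng = np.random.default_rng(4)
y = np.convolve(u, h_true)[:200] + rng.normal(0, 0.01, 200)
h_hat = identify_fir(u, y, order=4)          # close to h_true
```

For the impulse input, the least-squares solution is essentially `h_hat[j] = y[j]`, so each tap is read off directly from one noisy sample, which is the intuition behind the paper's optimality result.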
Parameter identification in tidal models with uncertain boundaries
Bagchi, Arunabha; ten Brummelhuis, P.G.J.; ten Brummelhuis, Paul
1994-01-01
In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The
Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters
Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.
2005-12-01
The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
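Step (5) combines the chi-squared fit statistic with a Bayesian prior into a merit function. A toy version with made-up numbers shows how the prior can override a marginally better photometric fit:

```python
def merit(chi2, log_prior):
    """Log-merit combining the photometric fit and a prior over stellar
    parameters, in the spirit of the scheme described above."""
    return -0.5 * chi2 + log_prior

# Toy "model grid": chi-squared values from comparing synthetic to observed
# colors, plus log-prior weights; all numbers are invented for illustration.
grid = [
    {"teff": 5800, "logg": 4.4, "chi2": 3.1, "log_prior": -1.0},  # dwarf
    {"teff": 4900, "logg": 2.5, "chi2": 2.8, "log_prior": -3.0},  # giant
    {"teff": 6200, "logg": 4.2, "chi2": 9.5, "log_prior": -1.2},
]
best = max(grid, key=lambda m: merit(m["chi2"], m["log_prior"]))
```

Here the giant fits the colors slightly better (lower chi-squared) but the prior, encoding that dwarfs are more common at this position and brightness, tips the merit toward the dwarf solution, which is exactly the behavior wanted when screening out large-radius stars.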
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and of a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and input excitations.
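The joint state-parameter idea described above can be illustrated on a much smaller system than the authors' FE models. The sketch below is a toy, not the paper's implementation: an unknown stiffness parameter is appended to the state vector of a single-degree-of-freedom oscillator and estimated with a basic unscented Kalman filter. For brevity the input is treated as known here, whereas the paper additionally estimates the unknown excitation; all numerical values are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Deterministic sampling used by the unscented Kalman filter."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * (P + 1e-8 * np.eye(n)))
    pts = np.vstack([x, x + L.T, x - L.T])     # rows of L.T are scaled columns of L
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return pts, w

def f(x, u, dt=0.01, c=0.4):
    """Euler-discretized oscillator; state = [position, velocity, stiffness k]."""
    pos, vel, k = x
    return np.array([pos + dt * vel,
                     vel + dt * (-k * pos - c * vel + u),
                     k])                        # unknown stiffness: random walk

def h(x):
    return x[:1]                                # only position is measured

def ukf_step(x, P, u, z, Q, R):
    pts, w = sigma_points(x, P)
    Xp = np.array([f(p, u) for p in pts])       # propagate through dynamics
    xm = w @ Xp
    Pm = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - xm))
    pts2, w2 = sigma_points(xm, Pm)
    Zp = np.array([h(p) for p in pts2])         # propagate through measurement
    zm = w2 @ Zp
    S = R + sum(wi * np.outer(d, d) for wi, d in zip(w2, Zp - zm))
    C = sum(wi * np.outer(a, b) for wi, a, b in zip(w2, pts2 - xm, Zp - zm))
    K = C @ np.linalg.inv(S)
    Pn = Pm - K @ S @ K.T
    return xm + K @ (z - zm), 0.5 * (Pn + Pn.T)

# joint estimation on synthetic data with true stiffness k = 4.0
rng = np.random.default_rng(0)
dt, k_true = 0.01, 4.0
xt = np.array([1.0, 0.0])                       # true position, velocity
x, P = np.array([0.8, 0.0, 1.0]), np.diag([0.1, 0.1, 4.0])
Q, R = np.diag([1e-8, 1e-8, 1e-6]), np.array([[1e-4]])
for t in range(3000):
    u = np.sin(0.5 * t * dt)
    xt = np.array([xt[0] + dt * xt[1],
                   xt[1] + dt * (-k_true * xt[0] - 0.4 * xt[1] + u)])
    z = xt[:1] + 0.01 * rng.standard_normal(1)
    x, P = ukf_step(x, P, u, z, Q, R)
# x[2] should now be close to k_true
```

The deterministic sigma-point sampling is what lets the filter handle the nonlinearity (the product of the unknown k with the state) without computing response sensitivities, which is the property the abstract highlights.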
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
Energy Technology Data Exchange (ETDEWEB)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2016-06-15
Mathematical models provide a quantitative description of neuron activity, helping to understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two models, at different levels of description, of the input-output system to achieve the reconstruction of neuronal input. The reconstruction proceeds in two steps. First, the neuronal spiking activity is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data; in all three groups, the input parameters are reconstructed with fairly high accuracy. We then use the method to estimate the non-measurable acupuncture input parameters. The results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies, and that the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
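As a toy illustration of the first step, the Gamma shape and scale of an inter-spike-interval (ISI) train can be estimated, for instance, by the method of moments (the paper uses a state-space method instead, and its conversion formulas to LIF input parameters are not reproduced here; all numbers below are synthetic):

```python
import numpy as np

def gamma_moments(isi):
    """Method-of-moments estimates of the Gamma shape k and scale theta."""
    m, v = np.mean(isi), np.var(isi)
    return m * m / v, v / m          # since mean = k*theta and var = k*theta**2

# synthetic ISI train with known parameters (in seconds)
rng = np.random.default_rng(1)
isi = rng.gamma(shape=2.5, scale=0.02, size=5000)
k_hat, theta_hat = gamma_moments(isi)   # should land near 2.5 and 0.02
```

The shape parameter captures the regularity of firing (larger shape means more regular spiking) and the scale sets the time unit, which is why the pair can serve as the "spiking characteristics" the abstract refers to.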
Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights
Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato
2016-07-01
Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.
SIMPLE MODEL FOR THE INPUT IMPEDANCE OF RECTANGULAR MICROSTRIP ANTENNA
Directory of Open Access Journals (Sweden)
Celal YILDIZ
1998-03-01
A very simple model for the input impedance of a coax-fed rectangular microstrip patch antenna is presented. It is based on the cavity model and equivalent resonant circuits. The theoretical input impedance results obtained from this model are in good agreement with the experimental results available in the literature. The model is well suited for computer-aided design (CAD).
Sliding mode identifier for parameter uncertain nonlinear dynamic systems with nonlinear input
Institute of Scientific and Technical Information of China (English)
张克勤; 庄开宇; 苏宏业; 褚健; 高红
2002-01-01
This paper presents a sliding mode (SM) based identifier to deal with the parameter identification problem for a class of parameter-uncertain nonlinear dynamic systems with input nonlinearity. A sliding mode controller (SMC) is used to ensure the global reaching condition of the sliding mode for the nonlinear system; an identifier is designed to identify the uncertain parameter of the nonlinear system. A numerical example is studied to show the feasibility of the SM controller and the asymptotic convergence of the identifier.
Oates, Alison R; Hauck, Laura; Moraes, Renato; Sibley, Kathryn M
2017-08-09
Walking is an important component of daily life, and successful walking requires sensorimotor integration. Adding haptic input via light touch or anchors has been shown to improve standing balance; however, the effect of adding haptic input during walking is not clear. This scoping review systematically summarizes the current evidence regarding the addition of haptic input during walking in adults. Following an established protocol, relevant studies were identified using indexed databases (Medline, EMBASE, PsychINFO, Google Scholar) and hand searches of published review articles on related topics. 644 references were identified and screened by a minimum of two independent researchers before data were extracted from 17 studies. A modified TREND tool was used to assess the quality of the references, which showed that the majority of studies were of moderate or high quality. Results show that adding haptic input changes walking behaviour. In particular, there is an immediate reduction in the variability of gait step parameters and whole-body stability, as well as a decrease in lower-limb muscle activity. The effect of added haptic input on reflex modulation may depend on the limb of interest (i.e., upper or lower limb). Many studies did not clearly describe the amount and/or direction of haptic input applied; this information is needed to replicate and/or advance their results. More investigations into the use and design of haptic tools, the attentional demands of adding haptic input, and clarity on short-term effects are needed. In addition, more research is needed to determine whether adding haptic input has significant, lasting benefits that may translate to fall-prevention efforts. Copyright © 2017 Elsevier B.V. All rights reserved.
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when in expectation each comparison set of that cardinality occurs the same number of times, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of the comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by the theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
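A minimal sketch of the maximum-likelihood estimator analyzed above, for the special case of the Luce model with top-1 observations. This is a toy gradient-ascent implementation with made-up data, not the authors' code; learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

def fit_luce(winners, sets, n_items, iters=300, lr=1.0):
    """MLE of Luce strengths theta from top-1 lists.
    winners[i] is the chosen item; sets[i] lists the items offered."""
    theta = np.zeros(n_items)
    n = len(winners)
    for _ in range(iters):
        w = np.exp(theta[sets])                      # (n, set_size)
        p = w / w.sum(axis=1, keepdims=True)         # choice probabilities
        grad = np.bincount(winners, minlength=n_items).astype(float)
        np.add.at(grad, sets, -p)                    # observed minus expected counts
        theta += lr * grad / n                       # ascend the concave log-likelihood
        theta -= theta.mean()                        # pin down the scale invariance
    return theta

# synthetic top-1 lists drawn from known strengths
rng = np.random.default_rng(0)
true = np.array([1.0, 0.0, -1.0])
n = 3000
sets = np.tile(np.arange(3), (n, 1))                 # every set offers all three items
probs = np.exp(true) / np.exp(true).sum()
winners = rng.choice(3, size=n, p=probs)
theta = fit_luce(winners, sets, 3)                   # approaches `true`
```

Because the Luce log-likelihood is concave in the strength vector (up to the additive shift removed by centering), this simple gradient ascent converges to the MLE whose mean squared error the paper characterizes.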
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
Storm-impact scenario XBeach model inputs and results
Mickey, Rangley; Long, Joseph W.; Thompson, David M.; Plant, Nathaniel G.; Dalyander, P. Soupy
2017-01-01
The XBeach model input and output of topography and bathymetry resulting from simulation of storm-impact scenarios at the Chandeleur Islands, LA, as described in USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009), are provided here. For further information regarding model input generation and visualization of model output topography and bathymetry refer to USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009).
Araújo, Fabíola; Filho, José; Klautau, Aldebaro
2016-12-01
Voice imitation basically consists in estimating a synthesizer's input parameters to mimic a target speech signal. This is a difficult inverse problem because the mapping is time-varying, non-linear and from many to one. It typically requires considerable amount of time to be done manually. This work presents the evolution of a system based on a genetic algorithm (GA) to automatically estimate the input parameters of the Klatt and HLSyn formant synthesizers using an analysis-by-synthesis process. Results are presented for natural (human-generated) speech for three male speakers. The results obtained with the GA-based system outperform those obtained with the baseline Winsnoori with respect to four objective figures of merit and a subjective test. The GA with Klatt synthesizer generated similar voices to the target and the subjective tests indicate an improvement in the quality of the synthetic voices when compared to the ones produced by the baseline.
Preisach models of hysteresis driven by Markovian input processes
Schubert, Sven; Radons, Günter
2017-08-01
We study the response of Preisach models of hysteresis to stochastically fluctuating external fields. We perform numerical simulations, which indicate that analytical expressions derived previously for the autocorrelation functions and power spectral densities of the Preisach model with uncorrelated input, hold asymptotically also if the external field shows exponentially decaying correlations. As a consequence, the mechanisms causing long-term memory and 1 /f noise in Preisach models with uncorrelated inputs still apply in the presence of fast decaying input correlations. We collect additional evidence for the importance of the effective Preisach density previously introduced even for Preisach models with correlated inputs. Additionally, we present some results for the output of the Preisach model with uncorrelated input using analytical methods. It is found, for instance, that in order to produce the same long-time tails in the output, the elementary hysteresis loops of large width need to have a higher weight for the generic Preisach model than for the symmetric Preisach model. Further, we find autocorrelation functions and power spectral densities to be monotonically decreasing independently of the choice of input and Preisach density.
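A toy Preisach model driven by an exponentially correlated (AR(1)) input, in the spirit of the study above. The hysteron grid, the uniform Preisach density, and the input parameters are illustrative assumptions, not those of the paper.

```python
import numpy as np

# grid of relay hysterons (alpha = up-switching, beta = down-switching threshold)
grid = np.linspace(-1.0, 1.0, 40)
alpha, beta = np.meshgrid(grid, grid, indexing="ij")
mask = alpha >= beta                     # hysterons live on the half-plane alpha >= beta
state = -np.ones_like(alpha)             # all relays start in the "down" state

def preisach_step(u):
    state[(u >= alpha) & mask] = 1.0     # relays switch up at their alpha
    state[(u <= beta) & mask] = -1.0     # relays switch down at their beta
    return state[mask].mean()            # output under a uniform Preisach density

rng = np.random.default_rng(2)
u, out = 0.0, []
for _ in range(5000):
    u = 0.9 * u + 0.2 * rng.standard_normal()        # exponentially decaying correlations
    out.append(preisach_step(np.clip(u, -1.0, 1.0)))
out = np.array(out)                      # hysteretic response time series
```

Relays that switched up stay up until the input falls below their beta, so the output retains memory of past input extrema; autocorrelation functions and spectra of `out` are the kind of quantities the study compares against analytical predictions.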
Model-Free importance indicators for dependent input
Energy Technology Data Exchange (ETDEWEB)
Saltelli, A.; Ratto, M.; Tarantola, S.
2001-07-01
A number of methods are available to assess uncertainty importance in the predictions of a simulation model for orthogonal sets of uncertain input factors. However, in many practical cases input factors are correlated. Even in these cases it is still possible to compute the correlation ratio and the partial (or incremental) importance measure, two popular sensitivity measures proposed in the recent literature on the subject. Unfortunately, the existing indicators of importance have limitations in terms of their use in sensitivity analysis of model output. Correlation ratios are indeed effective for priority setting (i.e., to find out which input factor needs better determination) but not, for instance, for the identification of the subset of the most important input factors, or for model simplification. In such cases other types of indicators are required that can cope with the simultaneous occurrence of correlation and interaction (a property of the model) among the input factors. In (1) the limitations of current measures of importance were discussed and a general approach was identified to quantify uncertainty importance for correlated inputs in terms of different betting contexts. This work was later submitted to the Journal of the American Statistical Association. However, the computational cost of such an approach is still high, as is typical when dealing with correlated input factors. In this paper we explore how suitable designs could reduce the numerical load of the analysis. (Author) 11 refs.
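The correlation ratio Var(E[Y|Xi])/Var(Y) mentioned above can be estimated nonparametrically by binning the input factor. The sketch below uses a made-up toy model with correlated inputs, purely to show the mechanics, not the designs discussed in the report.

```python
import numpy as np

def correlation_ratio(x, y, bins=20):
    """Estimate Var(E[Y|X]) / Var(Y) by binning X into equal-count bins."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

# toy model: x2 is correlated with x1, output is linear in both
rng = np.random.default_rng(3)
n = 20000
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)
y = x1 + 0.5 * x2
eta1 = correlation_ratio(x1, y)   # analytically 1.69/1.85, about 0.91
```

Note how the correlation between x1 and x2 inflates the importance of x1: the conditional expectation E[Y|x1] absorbs part of x2's contribution, which is exactly the interpretation difficulty the report discusses.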
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
Space market model space industry input-output model
Hodgin, Robert F.; Marchesini, Roberto
1987-01-01
The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An input-output model of market activity is proposed that is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
Institute of Scientific and Technical Information of China (English)
Mohammad Pourmahmood Aghababa; Hassan Feizi
2012-01-01
This paper deals with the design of a novel nonsingular terminal sliding mode controller for finite-time synchronization of two different chaotic systems with fully unknown parameters and nonlinear inputs. We propose a novel nonsingular terminal sliding surface and prove its finite-time convergence to zero. We assume that both the master's and the slave's system parameters are unknown in advance. Proper adaptation laws are derived to tackle the unknown parameters. An adaptive sliding mode control law is designed to ensure the existence of the sliding mode in finite time. We prove that both the reaching and sliding mode phases are stable in finite time. An estimation of the convergence time is given. Two illustrative examples show the effectiveness and usefulness of the proposed technique. It is worth noticing that the introduced nonsingular terminal sliding mode can be applied to a wide variety of nonlinear control problems.
Experimental Determination of E-Cloud Simulation Input Parameters for DAFNE
Vaccarezza, Cristina; Giglia, Angelo; Mahne, Nicola; Nannarone, Stefano
2005-01-01
After the first experimental observations compatible with the presence of the electron-cloud effect in the DAFNE positron ring, an experimental campaign was started to measure realistic parameters to be used in the simulation codes. Here we present a synchrotron radiation experiment on the photon reflectivity of the actual Al vacuum chamber of DAFNE (same material, roughness and surface cleaning as the chamber used to manufacture the ring), in the same energy range as the photons produced by the accelerator itself. The derived experimental parameter has then been included in the e-cloud simulation codes, and the obtained results confirm the relevance of detailed knowledge of the input parameters for obtaining reliable e-cloud simulations.
Sensitivity Analysis of the ALMANAC Model's Input Variables
Institute of Scientific and Technical Information of China (English)
XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da
2002-01-01
Crop models often require extensive input data sets to realistically simulate crop growth, and developing such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with a relative error of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those obtained with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, followed by solar radiation, for both maize and sorghum, especially under dryland conditions. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. Yield was more sensitive to all variables for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.
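The perturbation procedure itself can be sketched generically. The response function below is a made-up stand-in for ALMANAC (whose inputs and physics are far richer), so only the one-at-a-time mechanics carry over; the base values and the toy curve-number term are assumptions.

```python
import numpy as np

def yield_model(x):
    """Toy yield response (NOT ALMANAC): radiation-driven growth limited by
    plant-available water, with a crude curve-number runoff loss."""
    rad, rain, depth, paw, cn = x
    runoff = max(0.0, rain * (cn / 100.0) ** 2 - 10.0)   # toy curve-number term
    water = min(paw * depth, rain - runoff)
    return 0.05 * rad * min(1.0, water / 300.0)

# base inputs: radiation (MJ/m2/d), rainfall (mm), soil depth (m), PAW (mm/m), CN
base = np.array([18.0, 500.0, 1.5, 150.0, 80.0])
names = ["radiation", "rainfall", "soil depth", "PAW", "curve number"]

y0 = yield_model(base)
sens = {}
for i, name in enumerate(names):
    x = base.copy()
    x[i] *= 1.10                               # one-at-a-time +10 % perturbation
    sens[name] = (yield_model(x) - y0) / y0    # relative yield change
```

Even in this caricature, the curve number dominates because runoff scales with its square before subtracting from plant-available water, loosely echoing the study's finding that curve-number changes had the greatest impact.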
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g., Standard Least Squares, or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, where the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes; in other words, they yield a calibrated hydrological model that works well, but not for the right reasons. Moreover, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and the error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of the model residuals. As the hydrological model we use a conceptual distributed model called TETIS, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov chain Monte Carlo (MCMC) algorithm called DREAM-ZS, which quantifies the uncertainty of the hydrological and error model parameters through the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the
Quality assurance of weather data for agricultural system model input
It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...
Optimization of precipitation inputs for SWAT modeling in mountainous catchment
Tuo, Ye; Chiogna, Gabriele; Disse, Markus
2016-04-01
Precipitation is often the most important input to hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the precipitation gauging station nearest to the centroid of each subcatchment, eventually corrected using the band elevation method. This generally leads to an inaccurate representation of subcatchment precipitation, which results in unreliable simulation results in mountainous catchments. To investigate the impact of the precipitation inputs and account for the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values were then calculated at the subcatchment scale and supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) were applied as precipitation inputs to three Alpine subcatchments of the Adige catchment (North-eastern Italy, 12100 km2). Based on the calibration and validation results, model performances are evaluated according to the Nash-Sutcliffe Efficiency (NSE) and the Coefficient of Determination (R2). For all three subcatchments, the simulation results with IDW inputs are better than those of the original method, which uses measured inputs from the nearest station. This suggests that the IDW method can improve model performance in Alpine catchments to some extent. By taking into account and weighting the distances between precipitation records, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
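The IDW step described above can be sketched as follows; the gauge coordinates and daily values are made up for illustration, and averaging IDW estimates over points inside a subcatchment corresponds to the optimized SWAT precipitation input.

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0):
    """Inverse Distance Weighting: weighted average with weights 1/d**power."""
    d = np.linalg.norm(xy_gauges - xy_target, axis=1)
    if np.any(d < 1e-9):                       # target coincides with a gauge
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# made-up gauge locations (km) and one day of precipitation (mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
precip = np.array([12.0, 4.0, 8.0])

# average the IDW estimate over a few points representing one subcatchment
pts = np.array([[2.0, 2.0], [3.0, 4.0], [1.0, 3.0]])
subcatchment_mean = np.mean([idw(gauges, precip, p) for p in pts])
```

Because the weights are positive and normalized, the IDW estimate is a convex combination of the gauge values, so the interpolated precipitation always stays within the observed range, unlike some polynomial interpolators.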
THE EFFECT OF NITROGEN INPUT ON POLARISED SUGAR PRODUCTION AND QUALITATIVE PARAMETERS OF SUGAR BEET
Directory of Open Access Journals (Sweden)
MILAN MACÁK
2007-11-01
Full Text Available During 1998-2002, the effects of different forms and doses of nitrogen on quantitative (polarised sugar production) and qualitative parameters (digestion, molasses-forming components: potassium, sodium and α-amino nitrogen content) of sugar beet grown in vulnerable zones (Nitrate Directive) were studied. The calculated input of nitrogen ranged from 12 kg up to 240 kg N.ha-1. Increasing the input of N into the soil from FYM application increased the α-amino nitrogen content in the root, which in consequence decreased the sugar content (negative correlation, r = -0.8659). The application of straw instead of FYM in analogous treatments caused a significant decrease (straw versus FYM) and a highly significant decrease (straw plus N fertilizers versus FYM plus N fertilizers) of the α-amino nitrogen content in the sugar beet root, leaving the productive parameters unchanged. The α-amino nitrogen content in the sugar beet root thus indicates environmentally friendly management practices with a causal relation to the protection of water from nitrate.
The use of synthetic input sequences in time series modeling
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Dair Jose de [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Letellier, Christophe [CORIA/CNRS UMR 6614, Universite et INSA de Rouen, Av. de l' Universite, BP 12, F-76801 Saint-Etienne du Rouvray cedex (France); Gomes, Murilo E.D. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Aguirre, Luis A. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil)], E-mail: aguirre@cpdee.ufmg.br
2008-08-04
In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
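A toy illustration of the idea, assuming a simple least-squares AR(2) model in place of the Letter's model class: a model fitted to noise-like data settles to a trivial solution under free-run iteration, while a synthetic (dummy) input keeps it active. The shuffled-series input used here is only one plausible choice of dummy signal, not the Letter's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=500)                 # noise-like "measured" series

# Fit an AR(2) model y[t] = a1*y[t-1] + a2*y[t-2] by least squares
X = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(X, y[2:], rcond=None)[0]

def free_run(a1, a2, u, n=200):
    """Iterate the fitted model; u is an exogenous (synthetic) input sequence."""
    z = [y[0], y[1]]
    for t in range(2, n):
        z.append(a1*z[-1] + a2*z[-2] + u[t])
    return np.array(z)

quiet = free_run(a1, a2, np.zeros(200))   # settles to the trivial solution ~0
dummy = rng.permutation(y)[:200]          # synthetic input: shuffled copy of the signal
lively = free_run(a1, a2, dummy)          # retains the variability of the original
print(np.std(quiet[-50:]), np.std(lively[-50:]))
```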
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior under dynamic convective operation and under combined convective and microwave operation well. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. [Abstract in Indonesian, translated:] PARAMETER ESTIMATION IN A BREAD BAKING MODEL. Bread product quality depends strongly on the baking process used. A model developed with qualitative and quantitative methods was calibrated by experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure: first the parameters of the heat and mass transfer model, then the parameters of the transformation model, and
Input-output model for MACCS nuclear accident impacts estimation
Energy Technology Data Exchange (ETDEWEB)
Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-27
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
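The Input-Output calculation behind such GDP-loss estimates can be sketched with a classic Leontief model; the 3-sector technical coefficients and the demand shock below are hypothetical and are not REAcct data.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficients matrix A (inputs per unit output)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.10, 0.20]])
L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse (I - A)^-1

final_demand = np.array([100.0, 80.0, 60.0])
x_base = L @ final_demand               # baseline gross output by sector

# Suppose an accident removes 30% of sector-0 final demand for the outage period
shocked = final_demand * np.array([0.7, 1.0, 1.0])
x_shock = L @ shocked
loss = x_base - x_shock                 # direct plus indirect output losses
print("output loss by sector:", loss)
```

Note that the loss exceeds the direct 30-unit demand cut because the Leontief inverse propagates the shock through inter-sector purchases.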
Photovoltaic module parameters acquisition model
Cibira, Gabriel; Koščová, Marcela
2014-09-01
This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on an equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters are available for acquisition at different realistic irradiation and temperature conditions as well as series resistances. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.
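The equivalent-circuit calculation underlying such a model can be sketched with the standard single-diode equation, which is implicit in the current and is solved here numerically; all parameter values are assumed for illustration and do not describe any particular module.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed single-diode parameters for a small 36-cell PV module (illustrative only)
I_ph, I_0 = 5.0, 1e-9        # photocurrent [A], diode saturation current [A]
n, Vt = 1.3, 0.025 * 36      # ideality factor, thermal voltage x cell count [V]
R_s, R_sh = 0.3, 300.0       # series / shunt resistance [ohm]

def current(V):
    """Solve the implicit single-diode equation
    I = I_ph - I_0*(exp((V + I*R_s)/(n*Vt)) - 1) - (V + I*R_s)/R_sh
    for the module current at terminal voltage V."""
    f = lambda I: I_ph - I_0*(np.exp((V + I*R_s)/(n*Vt)) - 1) - (V + I*R_s)/R_sh - I
    return brentq(f, -1.0, I_ph + 1.0)

V = np.linspace(0.0, 22.0, 100)
I = np.array([current(v) for v in V])
P = V * I
print("short-circuit current ~", current(0.0))
print("voltage at max power ~", V[np.argmax(P)])
```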
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima
2017-07-10
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application within a traffic model. The mode choice model includes the trip factors affecting the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...
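The multinomial logit choice probabilities mentioned above take a closed form; below is a minimal sketch with hypothetical systematic utilities for four modes (the values are invented, not estimated from the thesis data).

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P_i = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())                       # stabilize the exponentials
    e = {mode: math.exp(v - m) for mode, v in utilities.items()}
    s = sum(e.values())
    return {mode: ev / s for mode, ev in e.items()}

# Hypothetical systematic utilities for a work trip
V = {"car": -0.5, "bus": -1.2, "rail": -0.9, "walk": -2.0}
P = mnl_probabilities(V)
print(P)
```

In estimation, the coefficients inside each V would be fitted to observed choices by maximum likelihood; here they are fixed to illustrate the probability formula only.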
DEFF Research Database (Denmark)
Zborowski, C.; Renault, O.; Torres, A.
2017-01-01
scattering cross-section in the form of a weighted sum of individual cross-sections of the pure layers. In this study, we have experimentally investigated this by analyzing Al/Ta/AlGaN stacks on a GaN substrate. We present a refined analytical method, based on the use of a reference spectrum, for determining...... the required input parameters, i.e. the inelastic mean free path and the effective inelastic scattering cross-section. The use of a reference sample gives extra constraints which make the analysis faster to converge towards a more accurate result. Based on comparisons with TEM, the improved method provides...... results determined with a deviation typically better than 5% instead of around 10% without reference. The case of much thicker overlayers up to 66 nm is also discussed, notably in terms of accounting for elastic scattering in the analysis....
Parameter optimization model in electrical discharge machining process
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2008-01-01
The electrical discharge machining (EDM) process is, at present, still an experience-driven process in which the selected parameters are often far from the optimum, while selecting optimal parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish a parameter optimization model. An ANN model using the Levenberg-Marquardt algorithm has been set up to represent the relationship between the material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and the MRR is improved using the optimized machining parameters.
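The GA half of such a scheme can be sketched as follows; a quadratic surrogate stands in for the trained ANN, and the bounds, rates and the surrogate itself are invented for illustration, not taken from the paper.

```python
import random
random.seed(1)

# Stand-in surrogate for the trained ANN: a made-up smooth MRR(current, pulse_on)
def mrr(params):
    current, t_on = params
    return -(current - 12.0)**2 - 0.5*(t_on - 80.0)**2 + 400.0

BOUNDS = [(5.0, 20.0), (20.0, 150.0)]   # assumed machining-parameter ranges

def ga(fitness, bounds, pop_size=40, gens=60, mut=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                       # selection: keep top half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover: blend parents
            for i, (lo, hi) in enumerate(bounds):         # mutation: Gaussian jitter
                if random.random() < mut:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga(mrr, BOUNDS)
print(best, mrr(best))
```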
INPUT MODELLING USING STATISTICAL DISTRIBUTIONS AND ARENA SOFTWARE
Directory of Open Access Journals (Sweden)
Elena Iuliana GINGU (BOTEANU)
2015-05-01
Full Text Available The paper presents a method of properly choosing probability distributions for failure times in a flexible manufacturing system. Several well-known distributions often provide good approximations in practice. The commonly used continuous distributions are: Uniform, Triangular, Beta, Normal, Lognormal, Weibull, and Exponential. This article studies how to use the Input Analyzer of the simulation language Arena to fit probability distributions to data, or to evaluate how well a particular distribution fits. The objective was to select the most appropriate statistical distributions and to estimate the parameter values of the failure times for each machine of a real manufacturing line.
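A comparable distribution-fitting step can be sketched outside Arena with scipy.stats; the failure-time sample here is synthetic, and the Kolmogorov-Smirnov statistic stands in for the Input Analyzer's goodness-of-fit ranking.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic machine failure times (hours); the true process here is Weibull
failures = stats.weibull_min.rvs(1.8, scale=120.0, size=300, random_state=rng)

# Fit candidate distributions (location fixed at 0) and compare K-S statistics
candidates = {
    "expon":       stats.expon.fit(failures, floc=0),
    "weibull_min": stats.weibull_min.fit(failures, floc=0),
    "lognorm":     stats.lognorm.fit(failures, floc=0),
}
for name, params in candidates.items():
    ks = stats.kstest(failures, name, args=params)
    print(f"{name:12s} K-S statistic = {ks.statistic:.4f}")
```

The distribution with the smallest K-S statistic would then supply the failure-time parameters for the simulation model.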
Application of a Linear Input/Output Model to Tankless Water Heaters
Energy Technology Data Exchange (ETDEWEB)
Butcher T.; Schoenbauer, B.
2011-12-31
In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
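A minimal sketch of such a linear input/output fit, with invented per-period average input and output powers (not measured data from the study):

```python
import numpy as np

# Hypothetical per-period averages for a tankless water heater, each point an
# average over a mix of active draw and idle time: input power vs output power (kW)
inp = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 35.0])
out = np.array([0.25, 1.55, 4.15, 8.45, 17.1, 30.1])

# Linear input/output model: output = slope*input + intercept,
# where the slope acts as a marginal efficiency and the (negative)
# intercept as a standby/jacket loss term
slope, intercept = np.polyfit(inp, out, 1)
print(f"marginal efficiency ~ {slope:.3f}, standby loss ~ {-intercept:.3f} kW")

# The two fitted parameters can then predict performance for other load patterns
predicted = slope * 12.0 + intercept
print(f"predicted average output at 12 kW average input: {predicted:.2f} kW")
```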
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
Assessing and propagating uncertainty in model inputs in corsim
Energy Technology Data Exchange (ETDEWEB)
Molina, G.; Bayarri, M. J.; Berger, J. O.
2001-07-01
CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36 block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm and, in particular, how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)
Research of the Influences of Input Parameters on the Result of Vehicles Collision Simulation
Directory of Open Access Journals (Sweden)
Vuk Bogdanović
2012-05-01
Full Text Available Vehicle collisions are complex processes determined by a large number of different parameters. The development of computer programs for simulation has made the collision analysis and reconstruction procedure easier and has made it possible to assess the influence of different parameters on collision processes, which was not possible with classical methods. The quality of the results of a vehicle collision simulation and reconstruction is expressed by an error, determined from the difference between the vehicle stopping positions obtained by simulation and the established vehicle stopping positions in real collisions. Knowing the influence of collision parameters on the simulation error enables the development of more reliable models for automatic optimisation of the collision process and a reduction of the number of iterations in the procedure of a collision reconstruction. Within the scope of this paper, the analysis and classification of different collision parameters have been carried out according to the degree of their influence on the error in the simulation process in the software package Virtual CRASH. By varying twenty different collision parameters on a sample of seven crash tests, their influence on the distance, trajectory and angular error has been analysed, and the ten parameters with the highest level of influence (centre of gravity position from the front axle of vehicle 1, restitution coefficient, collision place in the longitudinal direction, collision place in the transverse direction, centre of gravity height of vehicle 2, centre of gravity height of vehicle 1, collision angle, contact plane angle, slowing down of the vehicle, and vehicle movement direction) have been distinguished.
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes
2016-01-01
of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...
System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time
DEFF Research Database (Denmark)
Sun, Zhen; Yang, Zhenyu
2011-01-01
An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in that its dead time depends on the input signal and the other parameters are time dependent. In order to identify these parameters in an online manner, the considered system is first discretized. Then, the nonlinear FOPDT identification problem is formulated as a stochastic Mixed Integer Non-Linear Programming problem, and an identification algorithm is proposed by combining the Branch...
Evaluating the uncertainty of input quantities in measurement models
Possolo, Antonio; Elster, Clemens
2014-06-01
The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
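A Type A Gaussian and a Type B rectangular distribution can be propagated through a measurement model by Monte Carlo, in the spirit of GUM Supplement 1; the measurement model and all numbers below are illustrative only, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Measurement model: resistance R = V / I, with distributions assigned
# to the input quantities and propagated by Monte Carlo sampling
V = rng.normal(5.000, 0.010, N)          # voltmeter reading: Gaussian (Type A)
I = rng.uniform(0.999, 1.001, N)         # current: rectangular from a spec (Type B)

R = V / I
print(f"R = {R.mean():.4f} ohm, u(R) = {R.std(ddof=1):.4f} ohm")
lo, hi = np.percentile(R, [2.5, 97.5])   # 95 % coverage interval from the sample
print(f"95 % coverage interval: [{lo:.4f}, {hi:.4f}] ohm")
```

Unlike the law-of-propagation formula, the Monte Carlo approach yields the full output distribution, so the coverage interval needs no Gaussian assumption.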
Kernel Principal Component Analysis for Stochastic Input Model Generation (PREPRINT)
2010-08-17
[Figure caption recovered from extraction residue] Fig. 13. Contours of saturation at 0.2 PVI: MC mean and variance from experimental samples, and MC mean and variance from PC realizations. PVI represents dimensionless time and is computed as PVI = ∫ Q dt / Vp. The stochastic input model provides a fast way to generate many realizations, which are consistent, in a useful sense, with the experimental data.
Performance Comparison of Sub Phonetic Model with Input Signal Processing
Directory of Open Access Journals (Sweden)
Dr E. Ramaraj
2006-01-01
Full Text Available The quest to arrive at a better model of signal transformation for speech has resulted in striving to develop better signal representations and algorithms. The article explores the word model, which is a concatenation of state-dependent senones, as an alternative to the phoneme. The research has the objective of combining the senone with Input Signal Processing (ISP), an algorithm which has been tried with phonemes and has been quite successful, comparing the performance of senones with ISP against phonemes with ISP, and supplying the result analysis. The research model uses the SPHINX IV [4] speech engine for its implementation owing to its flexibility towards new algorithms, its robustness, and performance considerations.
Energy Technology Data Exchange (ETDEWEB)
Butcher, B.M.
1997-08-01
A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables listing all of the input parameter values, references citing their sources, and in some cases references to more complete descriptions of the considerations leading to the selection of values.
Rezaei, Meisam; Seuntjens, Piet; Shahidi, Reihaneh; Joris, Ingeborg; Boënne, Wesley; Cornelis, Wim
2016-04-01
Soil hydraulic parameters, which can be derived from in situ and/or laboratory experiments, are key input parameters for modeling water flow in the vadose zone. In this study, we measured soil hydraulic properties with typical laboratory measurements and field tension infiltration experiments using Wooding's analytical solution and inverse optimization along the vertical direction within two typical podzol profiles with sand texture in a potato field. The objective was to identify proper sets of hydraulic parameters and to evaluate their relevance for hydrological model performance for irrigation management purposes. Tension disc infiltration experiments were carried out at five different depths for both profiles at consecutive negative pressure heads of 12, 6, 3 and 0.1 cm. At the same locations and depths, undisturbed samples were taken to determine the water retention curve with hanging water column and pressure extractors, and lab saturated hydraulic conductivity with the constant head method. Both approaches allowed determining the Mualem-van Genuchten (MVG) hydraulic parameters (residual water content θr, saturated water content θs, shape parameters α and n, and field or lab saturated hydraulic conductivity Kfs and Kls). Results demonstrated horizontal differences and vertical variability of hydraulic properties. Inverse optimization resulted in excellent matches between observed and fitted infiltration rates in combination with the final water content at the end of the experiment, θf, using Hydrus 2D/3D. It also resulted in close correspondence of Kfs with values from Logsdon and Jaynes' (1993) solution of Wooding's equation. The MVG parameters Kfs and α estimated from the inverse solution (θr set to zero) were relatively similar to the values from Wooding's solution, which were used as initial values, and the estimated θs corresponded to the (effective) field saturated water content θf. We found the Gardner parameter αG to be related to the optimized van
On linear models and parameter identifiability in experimental biological systems.
Lamberton, Timothy O; Condon, Nicholas D; Stow, Jennifer L; Hamilton, Nicholas A
2014-10-07
A key problem in the biological sciences is to be able to reliably estimate model parameters from experimental data. This is the well-known problem of parameter identifiability. Here, methods are developed for biologists and other modelers to design optimal experiments to ensure parameter identifiability at a structural level. The main results of the paper are to provide a general methodology for extracting parameters of linear models from an experimentally measured scalar function - the transfer function - and a framework for the identifiability analysis of complex model structures using linked models. Linked models are composed by letting the output of one model become the input to another model which is then experimentally measured. The linked model framework is shown to be applicable to designing experiments to identify the measured sub-model and recover the input from the unmeasured sub-model, even in cases that the unmeasured sub-model is not identifiable. Applications for a set of common model features are demonstrated, and the results combined in an example application to a real-world experimental system. These applications emphasize the insight into answering "where to measure" and "which experimental scheme" questions provided by both the parameter extraction methodology and the linked model framework. The aim is to demonstrate the tools' usefulness in guiding experimental design to maximize parameter information obtained, based on the model structure.
Measurement of Laser Weld Temperatures for 3D Model Input.
Energy Technology Data Exchange (ETDEWEB)
Dagel, Daryl; GROSSETETE, GRANT; Maccallum, Danny O.
2016-10-01
Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.
Computation of reduced energy input current stimuli for neuron phase models.
Anyalebechi, Jason; Koelling, Melinda E; Miller, Damon A
2014-01-01
A regularly spiking neuron can be studied using a phase model. The effect of an input stimulus current on the phase time derivative is captured by a phase response curve. This paper adapts a technique that was previously applied to conductance-based models to discover optimal input stimulus currents for phase models. First, the neuron phase response θ(t) due to an input stimulus current i(t) is computed using a phase model. The resulting θ(t) is taken to be a reference phase r(t). Second, an optimal input stimulus current i*(t) is computed to minimize a weighted sum of the square-integral 'energy' of i*(t) and the tracking error between the reference phase r(t) and the phase response due to i*(t). The balance between the conflicting requirements of energy and tracking error minimization is controlled by a single parameter. The generated optimal current i*(t) is then compared to the input current i(t) which was used to generate the reference phase r(t). This technique was applied to two neuron phase models; in each case, the current i*(t) generates a phase response similar to the reference phase r(t), and the optimal current i*(t) has a lower 'energy' than the square-integral of i(t). For constant i(t), the optimal current i*(t) need not be constant in time. In fact, i*(t) is large (possibly even larger than i(t)) in regions where the phase response curve indicates a stronger sensitivity to the input stimulus current, and smaller in regions of reduced sensitivity.
Roe, Byron
2013-01-01
The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz \cite{ms} theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
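A small numerical check of the equal input model's closed-form transition probabilities, for an assumed stationary distribution on four states, with the rate matrix scaled so that the total "pull" toward the stationary distribution is 1 (any other scaling rescales t):

```python
import numpy as np
from scipy.linalg import expm

# Equal input model on 4 states (generalizing Felsenstein 1981): the
# substitution rate into state j is proportional to its stationary frequency pi_j
pi = np.array([0.1, 0.2, 0.3, 0.4])
Q = np.tile(pi, (4, 1))                  # every row equals pi off the diagonal
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))      # diagonal makes each row sum to zero

t = 0.7
P = expm(Q * t)                          # matrix-exponential transition probabilities

# Closed form for this model: P_ij(t) = e^{-t} * delta_ij + (1 - e^{-t}) * pi_j
P_closed = np.exp(-t) * np.eye(4) + (1 - np.exp(-t)) * pi
print(np.allclose(P, P_closed))
```

The closed form follows because Q = 1·piᵀ − I and 1·piᵀ is idempotent, which is also why the model supports the linear invariants discussed in the abstract.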
Haller, Julian; Wilkens, Volker
2012-11-01
For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more precisely, the ratio of RMS voltage to RMS current) were measured for a piezocomposite high-intensity therapeutic ultrasound (HITU) transducer with an integrated matching network, two piezoceramic HITU transducers with external matching networks, and a passive 50 Ω dummy load. The electrical power and the voltage were measured during high-power application with an inline power meter and an RMS voltmeter, respectively, and the complex electrical impedance was measured indirectly with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, indicating only the electrical input power or only the voltage as the input parameter may not suffice for reliable characterization of ultrasound transducers in high-power applications.
Kigobe, M.; McIntyre, N.; Wheater, H. S.
2009-04-01
Interest in the application of climate and hydrological models in the Nile basin has risen in the recent past; however, the first obstacle for most efforts has been the estimation of historic precipitation patterns. In this study we have applied stochastic models to infill and extend observed data sets to generate inputs for hydrological modelling. Several stochastic climate models within the Generalised Linear Modelling (GLM) framework have been applied to reproduce spatial and temporal patterns of precipitation in the Kyoga basin. A logistic regression model (describing rainfall occurrence) and a gamma distribution (describing rainfall amounts) are used to model rainfall patterns. The parameters of the models are functions of spatial and temporal covariates and are fitted to the observed rainfall data by maximum likelihood. Using the fitted model, multi-site rainfall sequences over the Kyoga basin are generated stochastically as a function of the dominant seasonal, climatic and geographic controls. The generated rainfall sequences are then used to drive a semi-distributed hydrological model built with the Soil and Water Assessment Tool (SWAT). The sensitivity of runoff to uncertainty associated with missing precipitation records is thus tested. In an application to the Lake Kyoga catchment, the performance of the hydrological model depends strongly on the spatial representation of the input precipitation patterns, the model parameterisation and the performance of the GLM stochastic models used to generate the input rainfall. The results obtained so far indicate that stochastic models can be developed for several climatic regions within the Kyoga basin and that, given an identified stochastic rainfall model, input uncertainty due to precipitation can be usefully quantified. Ways forward for rainfall modelling and hydrological simulation in Uganda and the Upper Nile are discussed. Key Words: Precipitation, Generalised Linear Models, Input Uncertainty, Soil Water
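The two-stage occurrence/amounts structure described above can be sketched as follows; the constant parameters below are illustrative stand-ins for the fitted covariate functions of the GLM framework:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rainfall(n_days, p_wet, shape, scale):
    """Two-stage stochastic rainfall: Bernoulli occurrence, gamma amounts.

    p_wet, shape and scale are hypothetical constants; in the GLM framework
    they would be logistic / log-linear functions of seasonal and spatial
    covariates, fitted by maximum likelihood.
    """
    wet = rng.random(n_days) < p_wet           # occurrence stage (logistic regression)
    amounts = rng.gamma(shape, scale, n_days)  # amounts stage (gamma distribution)
    return np.where(wet, amounts, 0.0)

# One year of synthetic daily rainfall (mm) at a single site
series = simulate_rainfall(365, p_wet=0.3, shape=0.8, scale=12.0)
```

A multi-site generator would draw correlated occurrence and amount variates across gauges; the sketch keeps a single site for clarity.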
Marco Ganzetti; Nicole Wenderoth; Dante Mantini
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our inve...
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2012-12-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
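The contrast between NLIF and Poisson input can be illustrated by simulating a noisy leaky integrate-and-fire neuron and checking its spike-count Fano factor against the Poisson value of 1 (all parameter values here are illustrative, not fitted LGN values):

```python
import numpy as np

rng = np.random.default_rng(0)

def nlif_spike_counts(drive, n_trials=100, t_max=0.5, dt=1e-4,
                      tau=0.02, v_th=1.0, sigma=0.5):
    """Spike counts of a noisy leaky integrate-and-fire neuron, per trial.

    dv = (drive - v)/tau dt + sigma dW; reset to 0 on threshold crossing.
    Parameters are hypothetical, chosen so the neuron fires regularly.
    """
    n_steps = int(t_max / dt)
    v = np.zeros(n_trials)
    counts = np.zeros(n_trials, dtype=int)
    for _ in range(n_steps):
        v += (drive - v) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
        fired = v >= v_th
        counts += fired
        v[fired] = 0.0          # reset after a spike
    return counts

counts = nlif_spike_counts(drive=1.2)
fano = counts.var() / counts.mean()   # an inhomogeneous Poisson process gives ~1
```

With suprathreshold drive the model fires with high temporal precision, so the Fano factor falls well below 1, i.e. the spike trains are less noisy than Poisson, which is the property the study argues matters for V1 orientation selectivity.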
Institute of Scientific and Technical Information of China (English)
Yanlin He; Yuan Xu; Zhiqiang Geng; Qunxiong Zhu
2015-01-01
To explore the problems of monitoring chemical processes with large numbers of input parameters, a method based on the Auto-associative Hierarchical Neural Network (AHNN) is proposed. AHNN focuses on handling high-dimensional datasets. An AHNN consists of two parts: groups of subnets based on well-trained Auto-associative Neural Networks (AANNs) and a main net. The subnets play an important role in the performance of the AHNN. A simple but effective method of designing the subnets is developed in this paper, in which the subnets are designed according to the classification of the data attributes. To obtain this classification, an effective method called Extension Data Attributes Classification (EDAC) is adopted. A soft sensor using AHNN based on EDAC (EDAC-AHNN) is introduced. As a case study, production data from a Purified Terephthalic Acid (PTA) solvent system are used to examine the proposed model. The results of the EDAC-AHNN model are compared with experimental data extracted from the literature, demonstrating the efficiency of the proposed model.
Directory of Open Access Journals (Sweden)
Cheng Wang
2014-01-01
The identification of a class of linear-in-parameters multiple-input single-output systems is considered. Using iterative search, a least-squares based iterative algorithm and a gradient based iterative algorithm are proposed. A nonlinear example is used to verify the effectiveness of the algorithms, and the simulation results show that the least-squares based iterative algorithm produces more accurate parameter estimates than the gradient based iterative algorithm.
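The two estimator families can be sketched for a linear-in-parameters system (the example system below is invented; the paper's own example is nonlinear in its inputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-in-parameters MISO system: y = Phi @ theta + noise
theta_true = np.array([0.8, -0.5, 1.5])
Phi = rng.standard_normal((200, 3))      # regression (information) matrix
y = Phi @ theta_true + 0.05 * rng.standard_normal(200)

def ls_iterative(Phi, y, iters=20):
    """Least-squares based iterative estimate: Newton-like correction step."""
    H = np.linalg.inv(Phi.T @ Phi)
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        theta = theta + H @ Phi.T @ (y - Phi @ theta)
    return theta

def grad_iterative(Phi, y, iters=2000):
    """Gradient based iterative estimate with a convergent step size."""
    mu = 1.0 / np.linalg.eigvalsh(Phi.T @ Phi).max()
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        theta = theta + mu * Phi.T @ (y - Phi @ theta)
    return theta

theta_ls = ls_iterative(Phi, y)
theta_gd = grad_iterative(Phi, y)
```

On this well-conditioned linear example both estimators converge to the least-squares solution; the accuracy gap reported in the paper shows up for fixed iteration budgets and harder (nonlinear) regressors.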
Modelling Analysis of Forestry Input-Output Elasticity in China
Directory of Open Access Journals (Sweden)
Guofeng Wang
2016-01-01
Based on an extended economic model and spatial econometrics, this paper analyzes the spatial distribution and interdependence of forestry production in China, and calculates the input-output elasticities of forestry production. The results show significant spatial correlation in forestry production in China, with the spatial distribution mainly manifested as agglomeration. The output elasticity of the labor force is 0.6649 and that of capital is 0.8412, while the contribution of land is significantly negative. Labor and capital are thus the main determinants of province-level forestry production in China, and research on province-level forestry production should not ignore the spatial effect. The policy-making process should take into consideration inter-provincial effects on forestry production. This study provides scientific and technical support for forestry production.
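Output elasticities of this kind can be recovered by a log-linear (Cobb-Douglas) regression. The sketch below uses synthetic data with the reported elasticities built in and omits the spatial-econometric terms of the actual study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic province-level data with known elasticities close to those reported
beta_labor, beta_capital = 0.6649, 0.8412
L = rng.uniform(1, 10, 300)                      # labor input (arbitrary units)
K = rng.uniform(1, 10, 300)                      # capital input
Y = L**beta_labor * K**beta_capital * np.exp(0.01 * rng.standard_normal(300))

# ln Y = b0 + bL ln L + bK ln K : OLS on logs recovers the output elasticities
X = np.column_stack([np.ones(300), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
elasticity_labor, elasticity_capital = coef[1], coef[2]
```

In the Cobb-Douglas form the regression slopes are directly the percentage change in output per one-percent change in each input, which is what the reported 0.6649 and 0.8412 mean.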
Physics input for modelling superfluid neutron stars with hyperon cores
Gusakov, M E; Kantor, E M
2014-01-01
Observations of massive ($M \approx 2.0~M_\odot$) neutron stars (NSs), PSRs J1614-2230 and J0348+0432, rule out most of the models of nucleon-hyperon matter employed in NS simulations. Here we construct three possible models of nucleon-hyperon matter consistent with the existence of $2~M_\odot$ pulsars as well as with semi-empirical nuclear matter parameters at saturation, and semi-empirical hypernuclear data. Our aim is to calculate for these models all the parameters necessary for modelling dynamics of hyperon stars (such as equation of state, adiabatic indices, thermodynamic derivatives, relativistic entrainment matrix, etc.), making them available for a potential user. To this aim a general non-linear hadronic Lagrangian involving $\sigma\omega\rho\phi\sigma^\ast$ meson fields, as well as quartic terms in vector-meson fields, is considered. A universal scheme for calculation of the $\ell=0,1$ Landau Fermi-liquid parameters and relativistic entrainment matrix is formulated in the mean-field approximation. ...
Energy Technology Data Exchange (ETDEWEB)
Roudeau, P.; Stocchi, A. [Laboratoire de l'Accelerateur Lineaire, 91 - Orsay (France); Ciuchini, M.; Lubicz, V. [Rome Univ., INFN (Italy); D'Agostini, G.; Franco, E.; Martinelli, G. [Rome Univ. La Sapienza and Sezione INFN (Italy); Parodi, F. [Universita di Genova and INFN, Dipt. di Fisica (Italy)
2000-12-01
Within the Standard Model, a review of the current determination of the sides and angles of the CKM unitarity triangle is presented, using experimental constraints from the measurements of $|\epsilon_K|$, $|V_{ub}/V_{cb}|$, $\Delta m_d$ and from the limit on $\Delta m_s$, available in September 2000. Results from the experimental search for $B^0_s$-$\bar{B}^0_s$ oscillations are introduced in the present analysis using the likelihood function. Special attention is devoted to the determination of the theoretical uncertainties. The purpose of the analysis is to infer the regions where the parameters of interest lie with given probabilities. The BaBar '95% C.L. scanning' method is also discussed. (authors)
Institute of Scientific and Technical Information of China (English)
王永龙; 潘毅群
2014-01-01
The calibration methods and procedures for building energy simulation are briefly summarized, as well as the application and effect of sensitivity analysis during calibration. The software eQUEST 3-64 is used to establish an energy model of a prototypical office building. A series of single-factor sensitivity analyses is carried out on input parameters covering the building envelope, internal loads, and HVAC systems. By analyzing the results of dynamic simulation for the whole year, the sensitivity levels of all input parameters are compared, not only to provide a basis for calibrating energy simulations of existing office buildings, but also to identify the most significant parameters for energy-saving design of new buildings and retrofitting of existing ones.
Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.
Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam
2012-08-01
Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.
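The shared-noise idea in such a GLM can be sketched in a few lines; all rates, filters and scales below are hypothetical, not the fitted parasol-cell parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

n_bins = 5000
stim_drive = 0.1 * rng.standard_normal(n_bins)   # stand-in for filtered stimulus
common = 1.0 * rng.standard_normal(n_bins)       # unobserved shared noise source

def glm_spikes(private_scale):
    """Poisson GLM spike counts: rate = exp(baseline + stimulus + common + private).

    The common term is identical across cells; the private term is drawn fresh,
    so correlations arise without any direct cell-to-cell coupling filter.
    """
    log_rate = -2.0 + stim_drive + common + private_scale * rng.standard_normal(n_bins)
    return rng.poisson(np.exp(log_rate))

cell_a, cell_b = glm_spikes(0.3), glm_spikes(0.3)
noise_corr = np.corrcoef(cell_a, cell_b)[0, 1]   # positive, from the shared input
```

This reproduces the paper's central mechanism in miniature: two cells with no coupling term show synchronized counts purely through the common-noise input.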
Input-output capital coefficients for energy technologies. [Input-output model]
Energy Technology Data Exchange (ETDEWEB)
Tessmer, R.G. Jr.
1976-12-01
Input-output capital coefficients are presented for five electric and seven non-electric energy technologies. They describe the durable goods and structures purchases (at a 110-sector level of detail) that are necessary to expand productive capacity in each of twelve energy source sectors. Coefficients are defined in terms of 1967 dollar purchases per 10^6 Btu of output from new capacity, and original data sources include Battelle Memorial Institute, the Harvard Economic Research Project, The Mitre Corp., and Bechtel Corp. The twelve energy sectors are coal, crude oil and gas, shale oil, methane from coal, solvent refined coal, refined oil products, pipeline gas, coal combined-cycle electric, fossil electric, LWR electric, HTGR electric, and hydroelectric.
Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson
2009-01-01
Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. The uncertainty in the biomass, glucose, ammonium and base-consumption predictions was low compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (standardized regression coefficients, Morris screening and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters are parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control.
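The two-step workflow (Monte Carlo propagation of input uncertainty, then standardized regression coefficients to rank inputs) can be sketched on a toy model; the model and all distributions below are illustrative stand-ins, not the S. coelicolor model:

```python
import numpy as np

rng = np.random.default_rng(5)

def batch_model(mu_max, K_s, Y_xs):
    """Toy stand-in for the mechanistic model: final biomass of a batch
    culture consuming substrate S0 (Monod-flavoured, illustrative only)."""
    S0, X0 = 20.0, 0.1
    return X0 + Y_xs * S0 * mu_max / (mu_max + K_s / S0)

# Step 1: propagate input uncertainty by Monte Carlo sampling
n = 2000
mu_max = rng.normal(0.30, 0.03, n)   # hypothetical uncertain inputs
K_s    = rng.normal(0.50, 0.10, n)
Y_xs   = rng.normal(0.45, 0.02, n)
out = batch_model(mu_max, K_s, Y_xs)

# Step 2: standardized regression coefficients (SRC) rank input importance
X = np.column_stack([mu_max, K_s, Y_xs])
Xs = (X - X.mean(0)) / X.std(0)
ys = (out - out.mean()) / out.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))   # most influential input first
```

For a nearly linear model the squared SRCs approximately partition the output variance, which is why a handful of inputs can dominate, as the study found.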
Di Luzio, Mauro; Arnold, Jeffrey G.
2004-10-01
This paper describes the background, formulation and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring between 1994 and 2000 in the Blue River watershed (1233 km², located in Oklahoma). This effort is the first follow-up to the participation in the National Weather Service Distributed Model Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model efficiency in simulating hourly hydrographs by calibrating each storm event using the formulated approach. This included a combination of manual and automatic calibration (the Shuffled Complex Evolution method) and the use of input parameter values allowed to vary only within their physical extent. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, and the calibration strategy described in the paper appears to describe the storm events adequately. The presented application and calibration method are initial steps toward improving the hourly simulation of SWAT loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.
Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.
2008-07-01
The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
Solar Model Parameters and Direct Measurements of Solar Neutrino Fluxes
Bandyopadhyay, A; Goswami, S; Petcov, S T; Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati
2006-01-01
We explore a novel possibility of determining the solar model parameters, which serve as input in the calculations of the solar neutrino fluxes, by exploiting the data from direct measurements of the fluxes. More specifically, we use the rather precise value of the $^8B$ neutrino flux, $\phi_B$, obtained from the global analysis of the solar neutrino and KamLAND data, to derive constraints on each of the solar model parameters on which $\phi_B$ depends. We also use more precise values of the $^7Be$ and $pp$ fluxes as can be obtained from future prospective data and discuss whether such measurements can help in reducing the uncertainties of one or more input parameters of the Standard Solar Model.
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
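As a rough sanity check on such predictions, a free-space far-field estimate can be computed from the basic antenna parameters the study found sufficient (power, gain, distance). This deliberately ignores the 3D building geometry and damping that a propagation model such as NISMap adds; the numbers are hypothetical:

```python
import math

def far_field_e(p_watts, gain_dbi, dist_m):
    """Free-space far-field electric field strength (V/m) of an antenna.

    Standard relation E = sqrt(30 * P * G) / d, with G the linear gain.
    """
    g_lin = 10 ** (gain_dbi / 10)
    return math.sqrt(30 * p_watts * g_lin) / dist_m

# Hypothetical base-station sector: 20 W into a 15 dBi antenna, 100 m away
e_100m = far_field_e(p_watts=20, gain_dbi=15, dist_m=100)
```

The 1/d decay of this estimate is the baseline that ray-traced building damping then modifies; for exposure *ranking*, the study found such basic inputs already adequate.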
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
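A minimal sketch of the surrogate-plus-Bayesian-update workflow, with a cheap stand-in for the ocean model, a least-squares polynomial surrogate in place of spectral projection, and a grid posterior in place of MCMC (all functions and numbers illustrative):

```python
import numpy as np

def expensive_model(c_drag):
    """Stand-in for an ocean model run: a response (e.g. surge height)
    as a function of an uncertain wind drag coefficient."""
    return 2.0 + 15.0 * c_drag - 40.0 * c_drag**2

# Non-intrusive surrogate: least-squares quadratic fit to a few model runs
samples = np.linspace(0.001, 0.004, 9)
runs = expensive_model(samples)
coeffs = np.polyfit(samples, runs, 2)
surrogate = lambda c: np.polyval(coeffs, c)

# Bayesian update of the uncertain input, evaluated on a grid via the surrogate
obs, obs_sigma = expensive_model(0.0025) + 0.001, 0.01   # synthetic observation
grid = np.linspace(0.001, 0.004, 501)
prior = np.ones_like(grid)                               # flat prior on the range
like = np.exp(-0.5 * ((surrogate(grid) - obs) / obs_sigma) ** 2)
post = prior * like
post /= post.sum()
c_map = grid[post.argmax()]                              # posterior mode
```

Because every likelihood evaluation hits the surrogate instead of the model, the posterior (or an MCMC chain over it) costs almost nothing after the initial batch of runs, which is the point of the surrogate-based approach.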
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time, by integrating a high-frequency ultrasonic transducer with a fluorescence microscope. High-frequency ultrasound with a center frequency above 150 MHz can focus the acoustic field into a confined area 10 μm or less in diameter. This focusing capability was used to perturb the lipid bilayer of the cell membrane and induce intracellular delivery of macromolecules. Single-cell imaging was performed to investigate the behavior of a targeted cell after acoustic transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular Ca2+ concentration after acoustic transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied the peak-to-peak voltage and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the PI fluorescence intensity increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver the pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.
The stability of input structures in a supply-driven input-output model: A regional analysis
Energy Technology Data Exchange (ETDEWEB)
Allison, T.
1994-06-01
Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level, where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short-term changes on the supply side do not lead to substantial changes in input coefficients and do not therefore mean the abandonment of the concept of the production function, as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 of a total of 315) less than 2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
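The mechanics of such a stability test can be sketched with the Ghosh (supply-driven) model on a hypothetical three-sector table; the flows and shock size below are invented for illustration and are not the Washington State data:

```python
import numpy as np

# Hypothetical 3-sector flow table Z (rows = selling sector) and gross outputs x
Z = np.array([[10., 20., 15.],
              [ 5., 30., 25.],
              [20., 10., 40.]])
x = np.array([100., 120., 150.])

B = Z / x[:, None]           # allocation (output) coefficients, b_ij = z_ij / x_i
A = Z / x[None, :]           # technical (input) coefficients,  a_ij = z_ij / x_j
v = x - Z.sum(axis=0)        # primary inputs (value added), by sector

# Supply shock: halve the primary inputs of sector 0, solve the Ghosh model
v2 = v.copy()
v2[0] *= 0.5
x2 = np.linalg.solve(np.eye(3) - B.T, v2)   # x' = (I - B')^{-1} v'
Z2 = B * x2[:, None]                        # flows follow fixed allocation shares
A2 = Z2 / x2[None, :]                       # recomputed input coefficients
pct_change = 100 * (A2 - A) / A             # how stable are the A coefficients?
```

The empirical question the paper raises is exactly whether `pct_change` stays small for realistic tables when supply is restricted; in its six-sector tests it mostly did.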
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.
Analyzing the sensitivity of a flood risk assessment model towards its input data
Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene
2016-11-01
The Small Island Developing States are characterized by unstable economies and low-lying, densely populated cities, resulting in high vulnerability to natural hazards, of which flooding affects more people than any other. To limit the consequences of these hazards, adequate risk assessments are indispensable, yet satisfactory input data for such assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model to its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damage types were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good proxy for determining building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.
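The population-density proxy for building damage can be sketched as follows; all figures are hypothetical, not the Annotto Bay inputs:

```python
def estimate_buildings(pop_density_km2, flooded_area_km2, persons_per_household):
    """Approximate the number of flooded residential buildings when exact
    building footprints are unknown (assumes one household per building)."""
    return pop_density_km2 * flooded_area_km2 / persons_per_household

n_buildings = estimate_buildings(950, 4.2, 3.5)   # hypothetical density/area figures
damage_usd = n_buildings * 18000                  # hypothetical unit repair cost
```

The proxy trades building-level accuracy for data that are widely available (census density, household size), which is the point of the sensitivity result above.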
Determination of growth rates as an input of the stock discount valuation models
Directory of Open Access Journals (Sweden)
Momčilović Mirela
2013-01-01
When determining the value of stocks with different stock discount valuation models, one of the important inputs is the expected growth rate of dividends, earnings, cash flows and other relevant parameters of the company. The growth rate can be determined in three basic ways: on the basis of extrapolation of historical data, on the basis of professional assessment by the analysts who follow the business of the company, and on the basis of fundamental indicators of the company. The aim of this paper is to depict the theoretical basis and practical application of the stated methods for growth rate determination, and to indicate their advantages and deficiencies.
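The third approach, growth from fundamental indicators, is commonly computed as the earnings retention ratio times the return on equity. A minimal sketch (the figures are illustrative only, not from the paper):

```python
def fundamental_growth_rate(net_income, dividends, equity):
    """Expected growth rate from fundamentals: retention ratio x return on equity."""
    retention_ratio = 1 - dividends / net_income  # share of earnings reinvested
    roe = net_income / equity                     # return on equity
    return retention_ratio * roe

# a firm retaining 60% of earnings while earning 15% on equity -> g = 9%
g = fundamental_growth_rate(net_income=150.0, dividends=60.0, equity=1000.0)
print(f"{g:.1%}")  # -> 9.0%
```

The resulting g can then feed a dividend discount model such as the Gordon growth formula.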
DEFF Research Database (Denmark)
Olesen, Bjarne W.
2015-01-01
The first international standard that dealt with all indoor environmental parameters (thermal comfort, air quality, lighting and acoustics) was published in 2007 as EN 15251. This standard prescribed input parameters for design and assessment of energy performance of buildings and was a part of the set...
Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds
Directory of Open Access Journals (Sweden)
Indrajeet Chaubey
2010-11-01
There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which there are very little or no monitoring data available, raising the question of whether it is possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. The resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
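The Nash-Sutcliffe efficiency used to score the regionalized parameter sets compares squared model errors against the variance of the observations. A minimal sketch with made-up flow values:

```python
def nash_sutcliffe(observed, simulated):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit,
    values <= 0 mean the model is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

# hypothetical daily streamflows (observed vs. simulated)
obs = [10.0, 12.0, 15.0, 11.0, 9.0]
sim = [9.5, 12.5, 14.0, 11.5, 9.5]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.906
```

An NS of 0.906 would sit at the top of the calibration range (0.45–0.90) reported above.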
Modelling groundwater discharge areas using only digital elevation models as input data
Energy Technology Data Exchange (ETDEWEB)
Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science
2006-10-15
Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description is that groundwater recharge occurs at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. Areas in-between these act as discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions taking digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill to predict lakes, flow accumulation to predict future waterways, and topographical wetness indexes to divide the in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the
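The topographical wetness index mentioned above is conventionally computed per DEM cell as TWI = ln(a / tan β), where a is the upslope contributing area and β the local slope. A minimal sketch (the specific formulation and cell values used in the study are not given in the abstract, so the numbers below are illustrative):

```python
import math

def topographic_wetness_index(upslope_area_m2, slope_deg):
    """TWI = ln(a / tan(beta)); higher values indicate likely discharge (wet) cells."""
    return math.log(upslope_area_m2 / math.tan(math.radians(slope_deg)))

# a flat, high-accumulation valley cell scores much wetter than a steep ridge cell
print(topographic_wetness_index(50000, 0.5) > topographic_wetness_index(200, 15))  # -> True
```

Thresholding such an index is one way to separate the in-between areas into wet-period discharge and dry-period recharge zones.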
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two-equation turbulence model (SKE). The estimation results show that, although parameter estimation is an effective way to determine model parameters, it is difficult to obtain a single set of parameters for the SKE that suits all kinds of separated flow, so a modification of the turbulence model structure should be considered. Therefore, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that the NNKE is more accurate and versatile than the SKE. The success of the NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.
Wage Differentials among Workers in Input-Output Models.
Filippini, Luigi
1981-01-01
Using an input-output framework, the author derives hypotheses on wage differentials based on the assumption that human capital (in this case, education) will explain workers' wage differentials. The hypothetical wage differentials are tested on data from the Italian economy. (RW)
Ahn, Sungwoo; Zauber, S. Elizabeth; Worth, Robert M.; Rubchinsky, Leonid L.
2016-01-01
Hypokinetic symptoms of Parkinson's disease are usually associated with excessively strong oscillations and synchrony in the beta frequency band. The origin of this synchronized oscillatory dynamics is being debated. Cortical circuits may be a critical source of excessive beta in Parkinson's disease. However, subthalamo-pallidal circuits were also suggested to be a substantial component in generation and/or maintenance of Parkinsonian beta activity. Here we study how the subthalamo-pallidal circuits interact with input signals in the beta frequency band, representing cortical input. We use conductance-based models of the subthalamo-pallidal network and two types of input signals: artificially-generated inputs and input signals obtained from recordings in Parkinsonian patients. The resulting model network dynamics is compared with the dynamics of the experimental recordings from patient's basal ganglia. Our results indicate that the subthalamo-pallidal model network exhibits multiple resonances in response to inputs in the beta band. For a relatively broad range of network parameters, there is always a certain input strength, which will induce patterns of synchrony similar to the experimentally observed ones. This ability of the subthalamo-pallidal network to exhibit realistic patterns of synchronous oscillatory activity under broad conditions may indicate that these basal ganglia circuits are directly involved in the expression of Parkinsonian synchronized beta oscillations. Thus, Parkinsonian synchronized beta oscillations may be promoted by the simultaneous action of both cortical (or some other) and subthalamo-pallidal network mechanisms. Hence, these mechanisms are not necessarily mutually exclusive. PMID:28066222
X-Parameter Based Modelling of Polar Modulated Power Amplifiers
DEFF Research Database (Denmark)
Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel
2013-01-01
X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...
High Temperature Test Facility Preliminary RELAP5-3D Input Model Description
Energy Technology Data Exchange (ETDEWEB)
Bayless, Paul David [Idaho National Laboratory
2015-12-01
A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.
Standard Model evaluation of $\\varepsilon_K$ using lattice QCD inputs for $\\hat{B}_K$ and $V_{cb}$
Bailey, Jon A; Lee, Weonjong; Park, Sungwoo
2015-01-01
We report the Standard Model evaluation of the indirect CP violation parameter $\varepsilon_K$ using inputs from lattice QCD: the kaon bag parameter $\hat{B}_K$, $\xi_0$, $|V_{us}|$ from the $K_{\ell 3}$ and $K_{\mu 2}$ decays, and $|V_{cb}|$ from the axial current form factor for the exclusive decay $\bar{B} \to D^* \ell \bar{\nu}$.
Boedeker, Kirsten L.
The purpose of this work is to investigate and quantify the effects of technical parameter variability and reconstruction algorithm on image quality and object detectability. To accomplish this, metrics of both noise and signal to noise ratio (SNR) are explored and then applied in object detection tasks using a computer aided diagnosis (CAD) system. The noise power spectrum (NPS) is investigated as a noise metric in that it describes both the magnitude of noise and the spatial characteristics of noise that are introduced by the reconstruction algorithm. The NPS was found to be much more robust than the conventional standard deviation metric. The noise equivalent quanta (NEQ) is also studied as a tool for comparing effects of acquisition parameters (esp. mAs) on noise and, as NEQ is not influenced by reconstruction filter or other post-processing, its utility for comparison across different techniques and manufacturers is demonstrated. The Ideal Bayesian Observer (IBO) and Non-Prewhitening Matched Filter (NPWMF) are investigated as SNR metrics under a variety of acquisition and reconstruction conditions. The signal and noise processes of image formation were studied individually, which allowed for analysis of their separate effects on the overall SNR. The SNR metrics were found to characterize the influence of reconstruction filter and technical parameter variability with high sensitivity. To correlate the above SNR metrics with detection, signal images were combined with noise images and passed to a CAD system. A simulated lung nodule detection task was performed on a series of objects of increasing contrast. The average minimum contrast detected and corresponding IBO and NPWMF SNR values were recorded over 100 trials for each reconstruction filter and technical parameter condition studied. Among the trends discovered, it was found that detectability scales with SNR as mAs is varied. Furthermore, the CAD system appears to under-perform when sharp algorithms are
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
Energy Technology Data Exchange (ETDEWEB)
Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction-error setting, i.e. the sum of squared prediction errors is minimized....... For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...
Directory of Open Access Journals (Sweden)
Marco Ganzetti
2016-03-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CV_WM), the coefficient of variation of gray matter (CV_GM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CV_WM and CV_GM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images.
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images.
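The CJV metric is conventionally defined as the summed tissue standard deviations divided by the separation of the tissue means, so lower is better. A minimal sketch with toy intensity samples (the mask-definition procedure of the paper is not reproduced here):

```python
import statistics

def cjv(wm_intensities, gm_intensities):
    """Coefficient of joint variation: (sigma_WM + sigma_GM) / |mu_WM - mu_GM|.
    Lower values indicate better white/gray matter separability after INU correction."""
    s_wm = statistics.stdev(wm_intensities)
    s_gm = statistics.stdev(gm_intensities)
    m_wm = statistics.mean(wm_intensities)
    m_gm = statistics.mean(gm_intensities)
    return (s_wm + s_gm) / abs(m_wm - m_gm)

# toy voxel intensities inside hypothetical WM and GM masks
wm = [102, 98, 101, 99]
gm = [71, 69, 70, 70]
print(round(cjv(wm, gm), 3))  # -> 0.088
```

A parameter sweep would then select the INU correction settings minimizing this value.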
Energy Technology Data Exchange (ETDEWEB)
Rhee, I. H.; Cho, D.; Youn, S. H.; Kim, H. S.; Lee, S. J.; Ahn, H. K. [Soonchunhyang University, Ahsan (Korea)
2002-04-01
The aim of this research is to develop a standard methodology for determining the input parameters that have a substantial impact on the radiation doses of residents in the vicinity of the four nuclear power plants in Korea. We selected critical nuclides, pathways and organs related to human exposure via simulated estimation with K-DOSE 60, based on the updated ICRP-60, and sensitivity analyses. From the results we found that 1) the critical nuclides were {sup 3}H, {sup 133}Xe, {sup 60}Co for the Kori plants and {sup 14}C, {sup 41}Ar for the Wolsong plants. The most critical pathway was 'vegetable intake' for adults and 'milk intake' for infants; there was, however, no preference among the effective organs. 2) Sensitivity analyses showed that the chemical composition of a nuclide influenced the radiation dose much more than any other input parameter, such as food intake, radiation discharge, and transfer/concentration coefficients, by a factor of more than 10{sup 2}. The effect of transfer/concentration coefficients on the radiation dose was negligible. All input parameters showed an estimated correlation with the radiation dose close to 1.0, except for food intake at the Wolsong power plant (partial correlation coefficient (PCC) = 0.877). Consequently, we suggest that a prediction model or scenarios for food intake reflecting current living trends, and formal publications including details of the chemical components of the critical nuclides from each plant, are needed. Also, standardized domestic values of the parameters used in the calculation must replace the existing or default-set imported factors via properly designed experiments and/or modelling, such as transport of liquid discharge in waters near the plants, exposure tests on crops and plants, and so on. 4 figs., 576 tabs. (Author)
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
System identification, in practice, is carried out by perturbing processes or plants under operation. That is why, in many industrial applications, a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal, which is then employed in the identification experiment, and to examine the relationship between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. Since under such conditions of the identification experiment only D-suboptimality can be claimed, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
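The D-optimality criterion ranks candidate input signals by the determinant of the Fisher information matrix. For a toy linear model y = a + b·u with i.i.d. noise, the information matrix is proportional to XᵀX, and the sketch below compares two hypothetical input sequences (this is only an illustration of the criterion, not the paper's Bolza-form design problem):

```python
def d_criterion(inputs):
    """Determinant of the (unnormalised) 2x2 Fisher information X^T X for the
    linear model y = a + b*u, where each regressor row is [1, u]."""
    n = len(inputs)
    s1 = sum(inputs)
    s2 = sum(u * u for u in inputs)
    return n * s2 - s1 * s1  # det([[n, s1], [s1, s2]])

# spreading the input levels widely maximises information about (a, b),
# which is exactly what conflicts with plant-friendliness
print(d_criterion([-1, -1, 1, 1]) > d_criterion([-0.2, -0.1, 0.1, 0.2]))  # -> True
```

D-efficiency then expresses a candidate design's determinant relative to the D-optimal one.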
Isotope parameters (δD, δ18O) and sources of freshwater input to Kara Sea
Dubinina, E. O.; Kossova, S. A.; Miroshnikov, A. Yu.; Fyaizullina, R. V.
2017-01-01
The isotope characteristics (δD, δ18O) of Kara Sea water were studied for quantitative estimation of freshwater runoff at stations located along a transect from the Yamal Peninsula to Blagopoluchiya Bay (Novaya Zemlya). Freshwater samples were studied for glaciers (Rose, Serp i Molot) and for the Yenisei and Ob estuaries. As a whole, δD and δ18O are higher in glaciers than in river waters. The isotope composition of estuarial water from the Ob River is δD = -131.4 and δ18O = -17.6‰. Estuarial waters of the Yenisei River are characterized by compositions close to those of the Ob River (-134.4 and -17.7‰), as well as by isotopically "heavier" compositions (-120.7 and -15.8‰). Waters from the studied section of the Kara Sea can be regarded as a product of mixing of freshwater (δD = -119.4, δ18O = -15.5) and seawater (S = 34.9, δD = +1.56, δ18O = +0.25) with a composition close to that of Barents Sea water. The isotope parameters of the water vary significantly with salinity in the surface layer, and Kara Sea waters are desalinated along the entire studied transect due to river runoff. The concentration of freshwater is 5-10% in the main part of the water column, and 100 m. The maximum contribution of freshwater (>65%) was recorded in the surface layer of the central part of the sea.
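The freshwater contributions quoted above follow from a standard two-endmember mixing calculation. A minimal sketch using the endmember δ18O values given in the abstract (the sample value is hypothetical):

```python
def freshwater_fraction(d18o_sample, d18o_fresh=-15.5, d18o_sea=0.25):
    """Two-endmember mixing: freshwater fraction of a sample from its d18O.
    Default endmembers follow the abstract: freshwater -15.5, seawater +0.25 permil."""
    return (d18o_sample - d18o_sea) / (d18o_fresh - d18o_sea)

# a hypothetical surface sample at d18O = -10.0 permil is ~65% freshwater
print(round(freshwater_fraction(-10.0), 2))  # -> 0.65
```

The same linear balance can be written for δD or salinity, and consistency across tracers checks the choice of endmembers.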
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Analysis of the Model Checkers' Input Languages for Modeling Traffic Light Systems
Directory of Open Access Journals (Sweden)
Pathiah A. Samat
2011-01-01
Problem statement: Model checking is an automated verification technique that can be used for verifying properties of a system. A number of model checking systems have been developed over the last few years. However, no guideline is available for selecting the most suitable model checker for modeling a particular system. Approach: In this study, we compare the use of four model checkers, SMV, SPIN, UPPAAL and PRISM, for modeling a distributed control system. In particular, we look at the capabilities of the input languages of these model checkers for modeling this type of system. Limitations and differences of their input languages are compared and analysed by using a set of questions. Results: The results of the study show that although the input languages of these model checkers have a lot of similarities, they also have a significant number of differences. The results also show that one model checker may be more suitable than others for verifying this type of system. Conclusion: Users need to choose the right model checker for the problem to be verified.
Hoekstra, Henk; Herbonnet, Ricardo
2016-01-01
Improvements in the accuracy of shape measurements are essential to exploit the statistical power of planned imaging surveys that aim to constrain cosmological parameters using weak lensing by large-scale structure. Although a range of tests can be performed using the measurements, the performance of the algorithm can only be quantified using simulated images. This yields, however, only meaningful results if the simulated images resemble the real observations sufficiently well. In this paper we explore the sensitivity of the multiplicative bias to the input parameters of Euclid-like image simulations. We find that algorithms will need to account for the local density of sources. In particular, the impact of galaxies below the detection limit warrants further study, because magnification changes their number density, resulting in correlations between the lensing signal and multiplicative bias. Although achieving sub-percent accuracy will require further study, we estimate that sufficient archival Hubble Space Te...
Brown, D J
1996-07-01
A mathematical model is described, based on linear transmission line theory, for the computation of hydraulic input impedance spectra in complex, dichotomously branching networks similar to mammalian arterial systems. Conceptually, the networks are constructed from a discretized set of self-similar compliant tubes whose dimensions are described by an integer power law. The model allows specification of the branching geometry, i.e., the daughter-parent branch area ratio and the daughter-daughter area asymmetry ratio, as functions of vessel size. Characteristic impedances of individual vessels are described by linear theory for a fully constrained thick-walled elastic tube. Besides termination impedances and fluid density and viscosity, other model parameters included relative vessel length and phase velocity, each as a function of vessel size (elastic nonuniformity). The primary goal of the study was to examine systematically the effect of fractal branching asymmetry, both degree and location within the network, on the complex input impedance spectrum and reflection coefficient. With progressive branching asymmetry, fractal model spectra exhibit some of the features inherent in natural arterial systems such as the loss of prominent, regularly-occurring maxima and minima; the effect is most apparent at higher frequencies. Marked reduction of the reflection coefficient occurs, due to disparities in wave path length, when branching is asymmetric. Because of path length differences, branching asymmetry near the system input has a far greater effect on minimizing spectrum oscillations and reflections than downstream asymmetry. Fractal-like constructs suggest a means by which arterial trees of realistic complexity might be described, both structurally and functionally.
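The per-segment building block of such a network model is the standard lossless transmission-line input-impedance relation, applied recursively from the terminations back to the root. A minimal single-tube sketch (illustrative values; the author's model additionally includes viscosity, wall elasticity, and branching):

```python
import math

def input_impedance(z0, z_load, wavelength_ratio):
    """Input impedance of a lossless uniform tube of length l, with
    wavelength_ratio = l / lambda (so beta*l = 2*pi*l/lambda):
    Z_in = Z0 * (Z_L + j Z0 tan(beta l)) / (Z0 + j Z_L tan(beta l))."""
    t = math.tan(2 * math.pi * wavelength_ratio)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

# a matched termination (Z_L = Z0) is reflectionless: Z_in = Z0 at any length
print(abs(input_impedance(1.0, 1.0, 0.13) - 1) < 1e-12)  # -> True
```

In a branching tree, the parallel combination of the daughters' input impedances serves as the load impedance of the parent segment, which is how path-length asymmetry enters the root spectrum.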
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
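The core computational step, solving a transcendental fixed-point system instead of simulating coupled stochastic differential equations, can be sketched as below. This is a generic illustration with a sigmoid transfer function and made-up weights, not the specific moment-closure equations of the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steady_state_rates(W, b, f=sigmoid, r0=None, tol=1e-10, max_iter=10_000):
    """Solve the transcendental fixed-point system r = f(W r + b) for the
    steady-state rates of a Wilson-Cowan-type network by damped iteration."""
    r = np.zeros(len(b)) if r0 is None else np.asarray(r0, float)
    for _ in range(max_iter):
        r_new = 0.5 * r + 0.5 * f(W @ r + b)   # damping aids convergence
        if np.max(np.abs(r_new - r)) < tol:
            return r_new
        r = r_new
    raise RuntimeError("fixed-point iteration did not converge")

# two-population excitatory/inhibitory example (illustrative weights)
W = np.array([[0.8, -1.2],
              [0.7, -0.3]])
b = np.array([0.2, 0.1])
r = steady_state_rates(W, b)
residual = np.max(np.abs(r - sigmoid(W @ r + b)))   # ~0 at a fixed point
```

Solving such a system is orders of magnitude cheaper than Monte Carlo simulation of the corresponding stochastic network, which is the speed-up the abstract refers to.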
Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models
Energy Technology Data Exchange (ETDEWEB)
Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)
2011-04-15
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
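A minimal sketch of combining principal components analysis with an ANOVA-style main-effect decomposition, using a hypothetical two-factor logistic "crop" model on a full factorial design. All names and values are illustrative, and the estimator is far simpler than the Sobol'-Saltelli or FAST methods compared in the paper:

```python
import numpy as np
from itertools import product

def toy_dynamic_model(p1, p2, t):
    """Hypothetical dynamic model: logistic growth whose rate (p1) and
    capacity (p2) are the uncertain input factors."""
    return p2 / (1.0 + np.exp(-p1 * (t - 5.0)))

# full factorial design over discretised factor levels
levels1, levels2 = [0.5, 1.0, 1.5], [50.0, 100.0, 150.0]
t = np.linspace(0.0, 10.0, 20)
design = list(product(levels1, levels2))
Y = np.array([toy_dynamic_model(a, b, t) for a, b in design])  # runs x times

# PCA of the multivariate (time-series) output
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = Yc @ Vt.T                    # principal-component scores per run
inertia = s**2 / np.sum(s**2)         # variance share of each component

def first_order_index(scores_k, design, factor):
    """ANOVA-style main-effect index: variance of the conditional means of
    one component's scores over the levels of a single factor."""
    levels = sorted({row[factor] for row in design})
    cond_means = [np.mean([sc for sc, row in zip(scores_k, design)
                           if row[factor] == lv]) for lv in levels]
    return np.var(cond_means) / np.var(scores_k)

# generalised index: component indices weighted by the component inertias
GSI = [sum(inertia[k] * first_order_index(scores[:, k], design, f)
           for k in range(len(inertia)) if inertia[k] > 1e-12)
       for f in (0, 1)]
```

The weighting by `inertia` is what turns the per-component indices into a single synthesized measure of each factor's influence on the whole time series.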
Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction
Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad
2010-05-01
Hydrological models can help us to predict stream flows and associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed. The main reason, however, is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure everything we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation produces the best predictions because it can model dry and wet conditions during a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one of the ways to improve a model's utility is the reduction of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters
Kalicka, Renata; Pietrenko-Dabrowska, Anna
2007-03-01
In the paper, MRI measurements are used for assessment of brain tissue perfusion and other features and functions of the brain (cerebral blood flow - CBF, cerebral blood volume - CBV, mean transit time - MTT). Perfusion is an important indicator of tissue viability and functioning, since in pathological tissue blood flow and vascular and tissue structure are altered with respect to normal tissue. MRI enables diagnosing diseases at an early stage of their course. The parametric and non-parametric approaches to the identification of MRI models are presented and compared. The non-parametric modeling adopts gamma variate functions. The parametric three-compartmental catenary model, based on the general kinetic model, is also proposed. The parameters of the models are estimated on the basis of experimental data. The goodness of fit of the gamma variate and the three-compartmental models to the data and the accuracy of the parameter estimates are compared. Kalman filtering, smoothing the measurements, was adopted to improve the estimate accuracy of the parametric model. Parametric modeling gives a better fit and better parameter estimates than non-parametric modeling, and allows an insight into the functioning of the system. To improve the accuracy, optimal experiment design related to the input signal was performed.
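The non-parametric side of such an analysis, fitting a gamma variate function to a concentration-time curve, can be sketched as follows. The parameter values and the noise level are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma variate bolus curve used in non-parametric perfusion analysis:
    C(t) = A * (t - t0)^alpha * exp(-(t - t0)/beta) for t > t0, else 0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

# synthetic concentration-time curve with measurement noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 120)
true = (2.0, 3.0, 2.5, 1.8)        # A, t0, alpha, beta (illustrative)
data = gamma_variate(t, *true) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(gamma_variate, t, data, p0=(1.0, 2.0, 2.0, 2.0))

# CBV is proportional to the area under the fitted curve
cbv_proxy = float(np.sum(gamma_variate(t, *popt)) * (t[1] - t[0]))
```

A three-compartmental parametric model would replace `gamma_variate` with the solution of the compartmental ODE system, at the cost of a harder estimation problem, which is where the paper applies Kalman smoothing.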
The MARINA model (Model to Assess River Inputs of Nutrients to seAs)
Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin
2016-01-01
Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients t
Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model
Ghergulescu, Ioana; Muntean, Cristina Hava
2014-01-01
This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…
The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...
Energy Technology Data Exchange (ETDEWEB)
Busillo, C.; Calastrini, F.; Gualtieri, G. [Lab. for Meteorol. and Environ. Modell. (LaMMA/CNR-IBIMET), Florence (Italy); Carpentieri, M.; Corti, A. [Dept. of Energetics, Univ. of Florence (Italy); Canepa, E. [INFM, Dept. of Physics, Univ. of Genoa (Italy)
2004-07-01
The behaviour of atmospheric dispersion models is strongly influenced by meteorological input, especially as far as new-generation models are concerned. More sophisticated meteorological pre-processors require more extended and more reliable data. This is true in particular when short-term simulations are performed, while in long-term modelling detailed data are less important. In Europe, no standards exist for meteorological data; therefore, testing and evaluating the results of new-generation dispersion models is particularly important in order to obtain information on the reliability of model predictions. (orig.)
Institute of Scientific and Technical Information of China (English)
Huanqin Li; Jie Cheng; Baiwu Wan
2004-01-01
A new wavelet neural network architecture with multiple input layers is proposed and implemented for modeling a class of large-scale industrial processes. These processes are very complicated, the number of technological parameters that determine final product quality is quite large, and the parameters do not act simultaneously but in different procedures, so conventional feed-forward neural networks cannot model this class of problems efficiently. The network presented in this paper has several input layers arranged according to the sequence of work procedures in large-scale industrial production processes. The performance of such networks is analyzed, and the network is applied to model steel plate quality in a continuous casting furnace and hot rolling mill. Simulation results indicate that the developed methodology is competent and holds good prospects for this class of problems.
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...
Parameter counting in models with global symmetries
Energy Technology Data Exchange (ETDEWEB)
Berger, Joshua [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: jb454@cornell.edu; Grossman, Yuval [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: yuvalg@lepp.cornell.edu
2009-05-18
We present rules for determining the number of physical parameters in models with exact flavor symmetries. In such models the total number of parameters (physical and unphysical) needed to describe a matrix is less than in a model without the symmetries. Several toy examples are studied in order to demonstrate the rules. The use of global symmetries in studying the minimally supersymmetric standard model (MSSM) is examined.
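The counting rule underlying such analyses can be written schematically as follows; the Standard Model quark Yukawa sector shown here is the textbook illustration, not an example taken from this paper:

```latex
N_{\text{phys}} \;=\; N_{\text{general}} \;-\; \bigl(N_{G} - N_{H}\bigr),
```

where $N_{\text{general}}$ counts the parameters of the matrices absent the symmetry, $N_{G}$ the generators of the flavor symmetry of the kinetic terms, and $N_{H}$ the generators left unbroken. For the quark Yukawa couplings, $Y_u$ and $Y_d$ are complex $3\times 3$ matrices (36 real parameters), $G = U(3)_Q \times U(3)_u \times U(3)_d$ has 27 generators, and baryon number $U(1)_B$ survives, so $36 - (27 - 1) = 10$ physical parameters remain: six quark masses, three mixing angles, and one CP-violating phase.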
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian ... The method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...
Cosmological models with constant deceleration parameter
Energy Technology Data Exchange (ETDEWEB)
Berman, M.S.; de Mello Gomide, F.
1988-02-01
Berman presented elsewhere a law of variation for Hubble's parameter that yields constant deceleration parameter models of the universe. By analyzing Einstein, Pryce-Hoyle and Brans-Dicke cosmologies, we derive here the necessary relations in each model, considering a perfect fluid.
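The law referred to can be reconstructed as follows; this is a standard presentation of Berman's ansatz, given here for orientation rather than quoted from the paper. With Hubble's parameter varying as a power of the scale factor,

```latex
H \;=\; \frac{\dot a}{a} \;=\; D\,a^{-m}
\quad\Longrightarrow\quad
a(t) = (mDt)^{1/m}, \qquad
H = \frac{1}{mt}, \qquad
q \;\equiv\; -\frac{a\ddot a}{\dot a^{2}} \;=\; m-1,
```

so a constant $m$ yields a constant deceleration parameter $q$, independently of the particular gravity theory used to fix the constants $D$ and $m$.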
Weigand, M.; Kemna, A.
2016-06-01
Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but also the CC model with some other a priori specified frequency dispersion (e.g. the Warburg model) has been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
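The decomposition step at the heart of this analysis can be sketched with a Debye kernel and a non-negative least-squares fit. The Cole-Cole parameter values, the relaxation-time grid, and the low-frequency estimate of rho_0 are all illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.optimize import nnls

def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity model (used to generate ideal data)."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau)**c)))

# ideal CC spectrum (illustrative parameter values)
f = np.logspace(-2, 4, 50)
omega = 2.0 * np.pi * f
rho = cole_cole(omega, 100.0, 0.1, 0.01, 0.6)

# Debye decomposition: rho(w) = rho0 * (1 - sum_k m_k (1 - 1/(1 + i w tau_k)))
rho0_est = rho[0].real                    # low-frequency plateau estimate
y = 1.0 - rho / rho0_est                  # data vector to decompose
taus = np.logspace(-7, 3, 80)             # relaxation-time grid
G = np.array([1.0 - 1.0 / (1.0 + 1j * omega * tk) for tk in taus]).T
A = np.vstack([G.real, G.imag])           # stack real/imag for a real NNLS
b = np.concatenate([y.real, y.imag])
m_k, _ = nnls(A, b)                       # non-negative partial chargeabilities

m_total = m_k.sum()                                      # total chargeability
tau_mean = np.exp(np.sum(m_k * np.log(taus)) / m_total)  # log-weighted mean
```

Comparing `m_total` and `tau_mean` against the CC input values (0.1 and 0.01 s) reproduces, in miniature, the kind of kernel-induced bias the paper quantifies.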
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p ...) bias. By accounting for irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
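The weighting idea can be sketched on a deliberately simple linear model. Everything here (the linear drawdown relation, the two classes of pumping uncertainty, the undamped iteration) is an invented illustration of the weight construction, not the IUWLS algorithm as published:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic linear relation: drawdown s = a + b * Q, with the pumping
# input Q observed with well-dependent uncertainty (illustrative values)
a_true, b_true = 2.0, 0.5
n = 400
Q_true = rng.uniform(10.0, 50.0, n)
sigma_h = 0.05
sigma_Q = np.where(np.arange(n) < n // 2, 1.0, 8.0)  # metered vs estimated
Q_obs = Q_true + rng.normal(0.0, sigma_Q)
s_obs = a_true + b_true * Q_true + rng.normal(0.0, sigma_h, n)

def iuwls(Q, s, sigma_h, sigma_Q, n_iter=25):
    """Minimal sketch of input-uncertainty weighting: each observation's
    weight combines the head-observation error with the slope-propagated
    pumping error, and the weights are re-evaluated as the slope estimate
    changes.  The published IUWLS scheme is more elaborate than this."""
    X = np.column_stack([np.ones(Q.size), Q])
    beta = np.linalg.lstsq(X, s, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        w = 1.0 / (sigma_h**2 + beta[1]**2 * sigma_Q**2)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * s))
    return beta

a_hat, b_hat = iuwls(Q_obs, s_obs, sigma_h, sigma_Q)
```

Because the weights down-weight observations with large pumping uncertainty, the slope estimate suffers less from the attenuation bias that a plain OLS fit of `s_obs` on `Q_obs` exhibits.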
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
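The notion of Pareto-optimality used here is straightforward to implement. A minimal sketch, assuming smaller values mean better fits to each calibration target:

```python
def pareto_frontier(input_sets, fits):
    """Return the input sets whose goodness-of-fit vectors are non-dominated:
    no other set fits every calibration target at least as well and at least
    one target strictly better (smaller value = better fit here)."""
    frontier = []
    for i, fi in enumerate(fits):
        dominated = any(
            all(fj[k] <= fi[k] for k in range(len(fi))) and
            any(fj[k] < fi[k] for k in range(len(fi)))
            for j, fj in enumerate(fits) if j != i)
        if not dominated:
            frontier.append(input_sets[i])
    return frontier

# toy example: 4 candidate input sets, distances to 2 calibration targets
sets = ["A", "B", "C", "D"]
fits = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
best = pareto_frontier(sets, fits)   # D is dominated by B
```

No weights appear anywhere in this definition, which is exactly the point of the paper: the frontier replaces the weighted-sum GOF score and its arbitrary weight choices.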
Translation of CODEV Lens Model To IGES Input File
Wise, T. D.; Carlin, B. B.
1986-10-01
The design of modern optical systems is not a trivial task; even more difficult is the requirement for an opticker to accurately describe the physical constraints implicit in his design so that a mechanical designer can correctly mount the optical elements. Typical concerns include setback of baffles, obstruction of clear apertures by mounting hardware, location of the image plane with respect to fiducial marks, and the correct interpretation of systems having odd geometry. The presence of multiple coordinate systems (optical, mechanical, system test, and spacecraft) only exacerbates an already difficult situation. A number of successful optical design programs, such as CODEV (1), have come into existence over the years while the development of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) has allowed a number of firms to install "paperless" design systems. In such a system, a part which is entered by keyboard, or pallet, is made into a real physical piece on a milling machine which has received its instructions from the design system. However, a persistent problem is the lack of a link between the optical design programs and the mechanical CAD programs. This paper will describe a first step which has been taken to bridge this gap. Starting with the neutral plot file generated by the CODEV optical design program, we have been able to produce a file suitable for input to the ANVIL (2) and GEOMOD (3) software packages, using the Initial Graphics Exchange Specification (IGES) interface. This is accomplished by software of our design, which runs on a VAX (4) system. A description of the steps to be taken in transferring a design will be provided. We shall also provide some examples of designs on which this technique has been used successfully. Finally, we shall discuss limitations of the existing software and suggest some improvements which might be undertaken.
Spatial Statistical Procedures to Validate Input Data in Energy Models
Energy Technology Data Exchange (ETDEWEB)
Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.
2006-01-01
Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
Trait Characteristics of Diffusion Model Parameters
Directory of Open Access Journals (Sweden)
Anna-Lena Schubert
2016-07-01
Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed a great temporal stability over a period of eight months. However, the coefficients of consistency and reliability were only low to moderate and highest for drift rate parameters. These results show that the consistent variance of diffusion model parameters across tasks can be regarded as temporally stable ability parameters. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.
Parameter identification in the logistic STAR model
DEFF Research Database (Denmark)
Ekner, Line Elvstrøm; Nejstgaard, Emil
We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that...
Updates to Model Algorithms & Inputs for the Biogenic ...
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
Tarasova, L.; Knoche, M.; Dietrich, J.; Merz, R.
2016-06-01
Glacierized high-mountainous catchments are often the water towers for downstream regions, and modeling these remote areas is often the only available tool for the assessment of water resources availability. Nevertheless, data scarcity affects different aspects of hydrological modeling in such mountainous glacierized basins. Using the example of a poorly gauged glacierized catchment in Central Asia, we examined the effects of input discretization, model complexity, and calibration strategy on model performance. The study was conducted with the GSM-Socont model driven with climatic input from the corrected High Asia Reanalysis data set at two different discretizations. We analyze the effects of the use of long-term glacier volume loss, snow cover images, and interior runoff as additional calibration data. In glacierized catchments with winter accumulation type, where the transformation of precipitation into runoff is mainly controlled by snow and glacier melt processes, the spatial discretization of precipitation tends to have less impact on simulated runoff than a correct prediction of the integral precipitation volume. Increasing model complexity by using spatially distributed input or semidistributed parameter values does not increase model performance in the Gunt catchment, as the more complex model tends to be more sensitive to errors in the input data set. In our case, better model performance and quantification of the flow components can be achieved by additional calibration data, rather than by using more distributed model parameters. However, a semidistributed model better predicts the spatial patterns of snow accumulation and provides more plausible runoff predictions at the interior sites.
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
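The computational advantage described above can be sketched with a toy two-stage estimate in the spirit of parameter cascading (this is an illustration, not the authors' implementation): derivatives of the observed field are approximated directly from the data, standing in for the basis-function smoothing step, so no candidate-parameter PDE solves are needed. The PDE, data, and all numbers below are invented; the heat equation u_t = D u_xx serves as the example.

```python
import numpy as np

# Toy two-stage PDE parameter estimate: approximate u_t and u_xx from the
# field itself, then solve min_D || u_t - D * u_xx ||^2 in closed form.
D_true = 0.1
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.5, 101)
# Analytic solution u(t, x) = exp(-D pi^2 t) sin(pi x) plays the role of data.
u = np.exp(-D_true * np.pi**2 * t)[:, None] * np.sin(np.pi * x)[None, :]

u_t = np.gradient(u, t, axis=0)                            # du/dt
u_xx = np.gradient(np.gradient(u, x, axis=1), x, axis=1)   # d2u/dx2

# Trim the boundaries, where one-sided differences are least accurate.
interior = (slice(2, -2), slice(2, -2))
a, b = u_t[interior].ravel(), u_xx[interior].ravel()
D_hat = float(a @ b / (b @ b))   # closed-form least-squares estimate of D
print(round(D_hat, 3))
```

No numerical PDE solver is ever called, which is the point of the cascading approach: the expensive inner loop over thousands of candidate parameter values disappears.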
Application of lumped-parameter models
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady state response when using lumped-parameter models is given. (au)
Evapotranspiration Input Data for the Central Valley Hydrologic Model (CVHM)
U.S. Geological Survey, Department of the Interior — This digital dataset contains monthly reference evapotranspiration (ETo) data for the Central Valley Hydrologic Model (CVHM). The Central Valley encompasses an...
Using Crowd Sensed Data as Input to Congestion Model
DEFF Research Database (Denmark)
Lehmann, Anders; Gross, Allan
2016-01-01
Emission of airborne pollutants and climate gasses from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near real time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model, which is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interface with GIS resources and crowd sensed transportation data.
Indian Academy of Sciences (India)
Nishi Srivastava; S K Satheesh; Nadège Blond
2014-08-01
The objective of this study is to evaluate the ability of a European chemistry transport model, ‘CHIMERE’, driven by the US meteorological model MM5, to simulate aerosol concentrations [dust, PM10 and black carbon (BC)] over the Indian region. The model's reproduction of a meteorological event (a dust storm) and the impact of changes in soil-related parameters and meteorological input grid resolution on these aerosol concentrations were evaluated. A dust storm simulation over the Indo-Gangetic basin indicates the ability of the model to capture dust storm events. Measured (AERONET) and simulated parameters such as aerosol optical depth (AOD) and the Angstrom exponent are used to evaluate the performance of the model in capturing the dust storm event. A sensitivity study is performed to investigate the impact of changes in soil characteristics (thickness of the soil layer in contact with air, volumetric water, and air content of the soil) and meteorological input grid resolution on the aerosol (dust, PM10, BC) distribution. Results show that soil parameters and meteorological input grid resolution have an important impact on the spatial distribution of aerosol (dust, PM10, BC) concentrations.
Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila
2016-03-01
This study aims to develop an adaptive neuro-fuzzy inference system (ANFIS) for forecasting daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city is rising in parallel with economic growth, and thus observing, forecasting and controlling air pollution becomes increasingly important because of its health impact. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentrations, in different combinations, as the inputs to predict the 1-day-advance and same-day air pollution concentrations. The concentration values of the five air pollutants and seven meteorological parameters for Howrah during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate redundant input variables. A forward selection (FS) method was used for selecting the different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which helps in reducing the computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
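The forward selection step described above can be sketched as a greedy loop that repeatedly adds the input variable yielding the lowest error of a surrogate model. This is an illustrative sketch only, with a linear least-squares surrogate and synthetic data standing in for the ANFIS predictor and the monitoring data.

```python
import numpy as np

# Forward selection (FS) sketch: greedily add the input column that most
# reduces RMSE of an ordinary least-squares fit (surrogate for ANFIS).
rng = np.random.default_rng(0)
n, p = 400, 5
X = rng.standard_normal((n, p))
# Only variables 0 and 2 actually drive the synthetic response.
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n)

def rmse_of(cols):
    """Fit OLS with intercept on the chosen columns, return training RMSE."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((y - A @ coef) ** 2)))

selected, remaining = [], list(range(p))
for _ in range(2):  # greedily pick the two most useful inputs
    best = min(remaining, key=lambda c: rmse_of(selected + [c]))
    selected.append(best)
    remaining.remove(best)
print(selected)
```

In practice the criterion would be evaluated on held-out data rather than the training set, to avoid always preferring larger subsets.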
Shaffer, S. R.
2015-12-01
A method for representing grid-scale heterogeneous development density for urban climate models from probability density functions of sub-grid resolution observed data is proposed. Derived values are evaluated in relation to normalized Shannon entropy to provide guidance in assessing model input data. Urban fraction for dominant and mosaic urban class contributions is estimated by combining analysis of 30-meter resolution National Land Cover Database 2006 data products for continuous impervious surface area and categorical land cover. The method aims at reducing model error through improvement of urban parameterization and representation of observations employed as input data. The multi-scale variation of parameter values is demonstrated for several methods of utilizing input. The method provides multi-scale and spatial guidance for determining where parameterization schemes may be misrepresenting the heterogeneity of input data, along with motivation for employing mosaic techniques based upon assessment of input data. The proposed method has wider potential for geographic application and complements data products that focus on characterizing central business districts. The method enables obtaining urban fraction dependent upon resolution and class partition scheme, based upon improved parameterization of observed data, which provides one means of influencing simulation prediction at various aggregated grid scales.
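The normalized Shannon entropy used above as a heterogeneity gauge can be written in a few lines. The exact form is assumed here (entropy of the sub-grid class fractions divided by its maximum, log K), giving a value of 0 for a homogeneous grid cell and 1 for a perfectly mixed one.

```python
import math

# Normalized Shannon entropy of land-cover class fractions in a grid cell.
def normalized_entropy(fractions):
    """0.0 for a homogeneous cell, 1.0 for a maximally mixed one."""
    k = len(fractions)
    if k < 2:
        return 0.0
    h = -sum(p * math.log(p) for p in fractions if p > 0.0)
    return h / math.log(k)   # divide by the maximum possible entropy, log K

print(normalized_entropy([1.0, 0.0, 0.0]))   # single dominant class
print(normalized_entropy([0.25] * 4))        # four equally frequent classes
```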
Input-dependent wave attenuation in a critically-balanced model of cortex.
Directory of Open Access Journals (Sweden)
Xiao-Hu Yan
Full Text Available A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with an increasing spatial attenuation constant (i.e., decreasing range) the larger the input. This is in direct agreement with recent results that show that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility that changes spatial range with input strength can be thought of as implementing an input-dependent spatial integration: when the input is large, no evidence beyond the local input is needed; when the input is weak, evidence needs to be integrated over a larger spatial domain to achieve a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.
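The qualitative claim above can be made concrete with a toy parameterization (the functional forms and constants are invented for illustration, not taken from the model): susceptibility falls off as chi(d) = exp(-d / lam(I)), with an attenuation length lam(I) that shrinks as the input I grows.

```python
import math

# Toy input-dependent wave attenuation: larger input -> shorter spatial range.
def attenuation_length(I, lam0=10.0, k=1.0):
    """Spatial range of the response; largest (lam0) as input I -> 0 (assumed form)."""
    return lam0 / (1.0 + k * I)

def susceptibility(d, I):
    """Exponentially attenuating response at distance d for input strength I."""
    return math.exp(-d / attenuation_length(I))

# Weak input -> evidence is integrated over a larger spatial domain.
print(susceptibility(5.0, I=0.1) > susceptibility(5.0, I=2.0))
```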
High Flux Isotope Reactor system RELAP5 input model
Energy Technology Data Exchange (ETDEWEB)
Morris, D.G.; Wendel, M.W.
1993-01-01
A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the-art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) the secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR-specific experiments, and other pertinent experiments performed independent of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for the operational transients that allow use of a less detailed nodalization. Analysis of station blackout with core long-term decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model.
Regional input-output models and the treatment of imports in the European System of Accounts
Kronenberg, Tobias
2011-01-01
Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-o...
Large uncertainty in soil carbon modelling related to carbon input calculation method
DEFF Research Database (Denmark)
Keel, Sonja; Leifeld, Jens; Mayer, Jochen
2017-01-01
The application of dynamic models to report changes in soil organic carbon (SOC) stocks, for example as part of greenhouse gas inventories, is becoming increasingly important. Most of these models rely on input data from harvest residues or decaying plant parts and organic fertilizer, together referred to as soil carbon (C) inputs. The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and, if so, which of these equations is most suitable for Swiss conditions. For this purpose we used the five equations to calculate soil C inputs based on yield data from a Swiss long-term cropping experiment. Estimated annual soil C inputs from various crops were averaged over 28 years and four fertilizer...
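Why the choice of allometric equation matters can be sketched with two hypothetical yield-to-carbon-input equations (all coefficients below are invented for illustration; they are not the five published equations compared in the study). Applied to the same yields, they give noticeably different total soil C inputs.

```python
# Two hypothetical allometric equations converting crop yield (t dry matter/ha)
# into soil carbon input (t C/ha); coefficients are illustrative only.
yields_t_ha = [5.2, 6.8, 4.1]

def c_input_eq_a(y):
    # fixed residue-to-yield ratio of 0.5, 45 % carbon content (assumed)
    return 0.45 * (0.5 * y)

def c_input_eq_b(y):
    # intercept-plus-slope residue form (assumed numbers)
    return 0.45 * (0.6 + 0.35 * y)

total_a = sum(c_input_eq_a(y) for y in yields_t_ha)
total_b = sum(c_input_eq_b(y) for y in yields_t_ha)
spread = abs(total_a - total_b) / max(total_a, total_b)
print(round(total_a, 2), round(total_b, 2), round(spread, 2))
```

Even with broadly similar coefficients, the two equations disagree by several percent over just three crops, which accumulates over a 28-year simulation.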
Scientific and technical advisory committee review of the nutrient inputs to the watershed model
The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...
From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study
DEFF Research Database (Denmark)
Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky;
2015-01-01
As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...
Energy Technology Data Exchange (ETDEWEB)
Coffield, T; Patricia Lee, P
2007-01-31
The purpose of this report is to update the parameters used in Human Health Exposure calculations and the Bioaccumulation Transfer Factors used at SRS for Performance Assessment modeling. The reasons for the update are to utilize more recent information, validate the information currently used, and correct minor inconsistencies between modeling efforts performed in contiguous areas of the heavily industrialized central site, the General Separations Area (GSA). The SRS parameters were compared to those of a number of other DOE facilities and to generic national/global references to establish the relevance of the parameters selected and/or verify the regional differences of the southeastern USA. The parameters were specifically chosen to be expected values, with an identified range, rather than the overly conservative values specified for estimating an annual dose to the maximally exposed individual (MEI). The end use is to establish a standardized, up-to-date source for these parameters and to maintain it by reviewing future national references to evaluate the need for changes as new information is released. These reviews are to be added to this document by revision.
User requirements for hydrological models with remote sensing input
Energy Technology Data Exchange (ETDEWEB)
Kolberg, Sjur
1997-10-01
Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.
Tracking cellular telephones as an input for developing transport models
CSIR Research Space (South Africa)
Cooper, Antony K
2010-08-01
Full Text Available of tracking cellular telephones and using the data to populate transport and other models. We report here on one of the pilots, known as DYNATRACK (Dynamic Daily Path Tracking), a larger experiment conducted in 2007 with a more heterogeneous group of commuters...
Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.
Hoekstra, Eric; Versloot, Arjen
2016-03-01
Interference Frisian (IF) is a variety of Frisian, spoken by mostly younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we will argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb clusters and three-verb clusters in Frisian and in Dutch; the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model will be shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.
Statefinder parameters in two dark energy models
Panotopoulos, Grigoris
2007-01-01
The statefinder parameters ($r,s$) in two dark energy models are studied. In the first, we discuss in four-dimensional General Relativity a two-fluid model, in which dark energy and dark matter are allowed to interact with each other. In the second model, we consider the DGP brane model generalized by taking a possible energy exchange between the brane and the bulk into account. We determine the values of the statefinder parameters that correspond to the unique attractor of the system at hand. Furthermore, we produce plots in which we show $s,r$ as functions of redshift, and the ($s-r$) plane for each model.
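For reference, the statefinder pair is conventionally defined from the scale factor $a(t)$, the Hubble parameter $H = \dot{a}/a$, and the deceleration parameter $q$ (these are the standard definitions from the statefinder literature; the paper's sign conventions may differ):

```latex
r \;=\; \frac{\dddot{a}}{a H^{3}}, \qquad
s \;=\; \frac{r - 1}{3\left(q - \tfrac{1}{2}\right)}, \qquad
q \;=\; -\frac{\ddot{a}}{a H^{2}},
```

so that the spatially flat $\Lambda$CDM model corresponds to the fixed point $(r,s) = (1,0)$, which is what makes the ($s-r$) plane a useful diagnostic for distinguishing dark energy models.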
Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V
2016-05-01
This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
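The four evaluation indices named above admit short closed-form implementations. The formulations below are the standard ones (Willmott's index of agreement, the usual fractional bias); the paper may use slightly different variants, so treat this as a hedged sketch.

```python
import numpy as np

# Four common model-performance indices: RMSE, coefficient of determination,
# Willmott's index of agreement, and fractional bias.
def indices(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    r2 = float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))
    d = float(1.0 - np.sum((obs - pred) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2))
    fb = float(2.0 * (pred.mean() - obs.mean()) / (pred.mean() + obs.mean()))
    return rmse, r2, d, fb

obs = [2.1, 3.4, 2.8, 4.0, 3.1]
print(indices(obs, obs))  # a perfect prediction for reference
```

For a perfect prediction, RMSE and fractional bias are 0 while the coefficient of determination and index of agreement are 1, which is a quick sanity check on any implementation.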
Parameter Symmetry of the Interacting Boson Model
Shirokov, Andrey M.; Smirnov, Yu. F.
1998-01-01
We discuss the symmetry of the parameter space of the interacting boson model (IBM). It is shown that for any set of the IBM Hamiltonian parameters (with the only exception of the U(5) dynamical symmetry limit) one can always find another set that generates the equivalent spectrum. We discuss the origin of the symmetry and its relevance for physical applications.
Human task animation from performance models and natural language input
Esakov, Jeffrey; Badler, Norman I.; Jung, Moon
1989-01-01
Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
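Fitts' Law, one of the performance models mentioned above, predicts movement time from target distance and width. A minimal sketch (the coefficients a and b are device- and person-specific, and the values here are invented for illustration):

```python
import math

# Fitts' Law: movement time grows with the index of difficulty log2(2D/W).
def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted reach time in seconds (a, b are illustrative coefficients)."""
    index_of_difficulty = math.log2(2.0 * distance / width)  # in bits
    return a + b * index_of_difficulty

# Farther or smaller targets take longer to reach.
print(fitts_movement_time(0.20, 0.05))  # ID = log2(8) = 3 bits
```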
Tumor Growth Model with PK Input for Neuroblastoma Drug Development
2015-09-01
[Grant-support listing only; no abstract recovered. The record references NCI award CA130396 and DOD W81XWH-14-1-0103 (Department of the Army), supporting pharmacokinetic studies of anticancer drugs in very young children and the tumor growth model with PK input for neuroblastoma drug development (PI: Stewart).]
Setting Parameters for Biological Models With ANIMO
Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran
2014-01-01
ANIMO (Analysis of Networks with Interactive MOdeling) is software for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions
Influence of input matrix representation on topic modelling performance
CSIR Research Space (South Africa)
De Waal, A
2010-11-01
Full Text Available ...model, perplexity is an appropriate measure. It provides an indication of the model's ability to generalise by measuring the exponent of the mean log-likelihood of words in a held-out test set of the corpus. The exploratory abilities of the latent... The phrases are clearly more intelligible than single-word phrases in many cases, thus demonstrating the qualitative advantage of the proposed method. [Footnote 1: For the CRAN corpus, each subset of chunks includes the top 1000 chunks with the highest...]
How sensitive are estimates of carbon fixation in agricultural models to input data?
Directory of Open Access Journals (Sweden)
Tum Markus
2012-02-01
Full Text Available Background: Process based vegetation models are central to understanding the hydrological and carbon cycle. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity is of major interest when they are chosen for use. It is important to assess the effect of different input datasets in terms of quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. Besides, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.
How sensitive are estimates of carbon fixation in agricultural models to input data?
Tum, Markus; Strauss, Franziska; McCallum, Ian; Günther, Kurt; Schmid, Erwin
2012-02-01
Process based vegetation models are central to understanding the hydrological and carbon cycle. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity is of major interest when they are chosen for use. It is important to assess the effect of different input datasets in terms of quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. Besides, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.
Delineating Parameter Unidentifiabilities in Complex Models
Raman, Dhruva V; Papachristodoulou, Antonis
2016-01-01
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...
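The core diagnostic can be illustrated on a deliberately redundant toy model (a simplified stand-in for the multiscale-sloppiness analysis, not the authors' algorithm): in f(x; θ1, θ2) = θ1·θ2·x only the product θ1·θ2 is identifiable, and a singular value decomposition of the parametric sensitivity matrix exposes this as a (near-)zero singular value.

```python
import numpy as np

# Detect a structural unidentifiability from the sensitivity (Jacobian) matrix.
x = np.linspace(0.0, 1.0, 20)
theta = np.array([2.0, 3.0])

def model(th):
    # Only th[0]*th[1] affects the output: one direction in parameter
    # space leaves all predictions unchanged.
    return th[0] * th[1] * x

# Central finite-difference Jacobian of model outputs w.r.t. parameters.
eps = 1e-6
J = np.column_stack([
    (model(theta + eps * np.eye(2)[i]) - model(theta - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
s = np.linalg.svd(J, compute_uv=False)
print(s[1] / s[0])  # near zero: one parameter combination is unidentifiable
```

The right singular vector belonging to the small singular value gives the functional relation itself (here, the direction that trades θ1 against θ2 at constant product), which is what enables the model simplification discussed above.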
Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras
2017-04-01
Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Miller, L. D.; Tom, C.; Nualchawee, K.
1977-01-01
Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery with available map information in the landscape model provides a basis for substantial improvements in these applications.
Researches on the Model of Telecommunication Service with Variable Input Tariff Rates
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The paper sets up and studies a model of a telecommunication queueing service system with variable input tariff rates, which can relieve congested traffic flows during the busy hour and thereby enhance the utilization rate of telecommunication resources.
Enhancing debris flow modeling parameters integrating Bayesian networks
Graf, C.; Stoffel, M.; Grêt-Regamey, A.
2009-04-01
Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running it. Normally, the data base describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a certain torrent is limited. There are only a few places in the world where we can fortunately find valuable data sets describing the event history of debris-flow channels, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use the flow path and deposition zone information of reconstructed debris-flow events, derived from dendrogeomorphological analysis covering more than 400 years, to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk
Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W
2007-01-01
A multiple-input multiple-output nonlinear dynamic model of spike-train to spike-train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) of this Volterra kernel-based model. In our approach, model structure is determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients are then pruned based on the Wald test. Results showed that the reduced kernel models, which contain far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
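The two-stage procedure described above (grow by forward stepwise selection, then prune by Wald test) can be sketched with a hedged, ordinary-linear-regression analogue; this is not the paper's Volterra-kernel implementation, and the data, coefficients, and 1.96 threshold are invented for the sketch.

```python
import numpy as np

# Forward stepwise term selection followed by Wald-test pruning,
# on synthetic data where only terms 0 and 2 truly drive the output.
rng = np.random.default_rng(3)
n = 200
X = rng.standard_normal((n, 5))        # five candidate model terms
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + 0.01 * rng.standard_normal(n)

def fit(cols):
    A = X[:, cols]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, y - A @ beta

selected = []
for _ in range(3):                     # grow the model three steps
    sse = {j: float(np.sum(fit(selected + [j])[1] ** 2))
           for j in range(5) if j not in selected}
    selected.append(min(sse, key=sse.get))   # add the term that reduces SSE most

beta, resid = fit(selected)
# Wald test: prune terms whose |coefficient / standard error| < 1.96
sigma2 = np.sum(resid ** 2) / (n - len(selected))
cov = sigma2 * np.linalg.inv(X[:, selected].T @ X[:, selected])
se = np.sqrt(np.diag(cov))
kept = {j: float(b) for j, b, s in zip(selected, beta, se) if abs(b / s) >= 1.96}
```

The true terms survive pruning with coefficients close to their generating values, while weakly supported terms are candidates for removal.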
Institute of Scientific and Technical Information of China (English)
Fan Hong-Yi; Hu Li-Yun
2009-01-01
This paper proves a new theorem on the relationship between the two-parameter Radon transform of an optical field's Wigner function and the optical Fresnel transform of the field, i.e., when an input field ψ(x') propagates through an optical [D (-B) (-C) A] system, the energy density of the output field is equal to the Radon transform of the Wigner function of the input field, where the Radon transform parameters are D, B. We prove this theorem in both the spatial domain and the frequency domain; in the latter case the Radon transform parameters are A, C.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation; eliminating the state variables yields an equation containing only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation; eliminating the state variables yields an equation containing only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
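The elimination idea in these two records can be sketched for the simplest canonical case, a scalar system with y[k] = x[k]; the dimensions, values, and noise-free setting are illustrative assumptions, not the paper's general algorithm.

```python
import numpy as np

# Scalar canonical system: x[k+1] = a*x[k] + b*u[k], y[k] = x[k].
# Substituting the output equation into the state equation eliminates the
# state and leaves the input-output regression y[k+1] = a*y[k] + b*u[k],
# solved by least squares; states are then recomputed from the estimates.
rng = np.random.default_rng(0)
a_true, b_true, N = 0.8, 0.5, 200
u = rng.standard_normal(N)
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = a_true * x[k] + b_true * u[k]
y = x[:N]                                    # output equation: y[k] = x[k]

Phi = np.column_stack([y[:-1], u[:N - 1]])   # regressors [y[k], u[k]]
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# State estimates recomputed from identified parameters and the I/O data
x_hat = np.zeros(N)
x_hat[0] = y[0]
for k in range(N - 1):
    x_hat[k + 1] = a_hat * x_hat[k] + b_hat * u[k]
```

In this noise-free sketch the least squares estimates recover (a, b) exactly and the recomputed states match the measured outputs.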
Delineating parameter unidentifiabilities in complex models
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and an appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
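The local, Fisher-information view of identifiability that the abstract contrasts with multiscale sloppiness can be illustrated on a toy model; this sketch is the traditional FIM check (not the paper's algorithm), with an invented model y(t) = p1*p2*t in which only the product p1*p2 is identifiable.

```python
import numpy as np

# Structural unidentifiability shows up as a zero eigenvalue of the
# Fisher information matrix; the corresponding eigenvector spans the
# direction in parameter space along which predictions do not change.
t = np.linspace(0.1, 1.0, 10)
p1, p2 = 2.0, 3.0
S = np.column_stack([p2 * t, p1 * t])   # sensitivities dy/dp1, dy/dp2
fim = S.T @ S                           # unit measurement noise assumed

eigvals, eigvecs = np.linalg.eigh(fim)  # ascending eigenvalues
null_dir = eigvecs[:, 0]                # (near-)zero-curvature direction
# null_dir is parallel to (p1, -p2): increasing p1 while decreasing p2
# proportionally leaves p1*p2, and hence every prediction, unchanged.
```

The functional relation defining the unidentifiability (here, the product p1*p2) is read off from the null direction of the FIM.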
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion, using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
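The response-surface-then-invert workflow can be sketched in one parameter dimension; this hedged stand-in uses a plain Legendre (gPC) fit in place of the paper's multi-dimensional compressive-sensing construction, and the "property" function is invented for the sketch.

```python
import numpy as np

# Build a Legendre (gPC) response surface of a target property from a few
# sampled "simulations", then invert it to infer the parameter value that
# recovers a desired macroscopic property value.
def simulated_property(p):
    # stand-in for an expensive DPD run returning, e.g., a viscosity
    return 1.0 + 0.5 * p + 0.25 * p ** 2

samples = np.linspace(-1.0, 1.0, 5)     # sampling points in parameter space
values = simulated_property(samples)

coeffs = np.polynomial.legendre.legfit(samples, values, deg=2)
surrogate = np.polynomial.legendre.Legendre(coeffs)

target = 1.3125                          # desired property value
roots = (surrogate - target).roots()
# keep the physically admissible root inside the sampled range [-1, 1]
p_opt = next(r.real for r in roots if abs(r.imag) < 1e-9 and -1 <= r.real <= 1)
```

Because the surrogate is cheap to evaluate and differentiate, the same object also supports the sensitivity and degeneracy analyses the abstract mentions.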
Application of lumped-parameter models
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Liingaard, Morten
This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...
Models and parameters for environmental radiological assessments
Energy Technology Data Exchange (ETDEWEB)
Miller, C W [ed.
1984-01-01
This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)
2016-01-01
An important step in the operational modal analysis of a structure is to infer its dynamic behavior through its modal parameters. These can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data are available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified, assuming that ambient sources like wind or traffic excite the system sufficiently. When ...
Multi-bump solutions in a neural field model with external inputs
Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela
2016-07-01
We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria on the input width and distance, in relation to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine, for a given coupling function, the maximum number of localized inputs that can be stored in a given finite interval.
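The ingredients named above can be sketched numerically. The oscillatory coupling of damped-cosine type, w(x) = exp(-b|x|)(b sin|x| + cos x), is a common choice in this literature, but the decay rate b and the bump widths here are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Oscillatory coupling function: locally excitatory (w(0) = 1 > 0) with
# alternating inhibitory/excitatory side lobes that decay with distance.
b = 0.3
x = np.linspace(-30.0, 30.0, 2001)
w = np.exp(-b * np.abs(x)) * (b * np.sin(np.abs(x)) + np.cos(x))

def recurrent_drive(d, half_width=1.0):
    """Excitation delivered to position 0 by two active regions of the
    field (width 2*half_width) centered at +d and -d."""
    dx = x[1] - x[0]
    active = (np.abs(x - d) <= half_width) | (np.abs(x + d) <= half_width)
    return float(np.sum(w[active]) * dx)
```

Evaluating `recurrent_drive` over a range of distances d shows how the bump separation interacts with the oscillatory side lobes, which is the mechanism behind the distance criteria the abstract derives.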
Directory of Open Access Journals (Sweden)
Daniela Molinari
2017-09-01
Full Text Available The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. This result allows the usability of the model to be extended to the meso-scale, also in different countries, depending on the availability of aggregated building data.
Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.
2012-12-01
Agro-Land Surface Models (agro-LSMs) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum, with a particular emphasis on how crop phenology and agricultural management practices influence the turbulent fluxes exchanged with the atmosphere and the underlying water and carbon pools. Part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis, leading to the selection of a subset of the most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
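The Monte-Carlo-plus-PRCC step described above follows a standard recipe: rank-transform inputs and output, regress out the other inputs, and correlate the residuals. The sketch below implements that recipe on invented data (two parameters, one dominant), not the ORCHIDEE-STICS outputs.

```python
import numpy as np

def prcc(X, y):
    # Partial ranked correlation coefficient of each input column with y.
    def ranks(v):
        r = np.empty_like(v)
        r[np.argsort(v)] = np.arange(v.size)
        return r
    Xr = np.column_stack([ranks(c) for c in X.T])
    yr = ranks(y)
    n, k = X.shape
    out = []
    for j in range(k):
        # regress out the other (rank-transformed) inputs, then correlate residuals
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(float(np.corrcoef(res_x, res_y)[0, 1]))
    return out

rng = np.random.default_rng(7)
X = rng.uniform(size=(200, 2))                        # Monte-Carlo parameter sample
y = 5.0 * X[:, 0] + 0.1 * rng.standard_normal(200)    # output driven by parameter 0
sens = prcc(X, y)
```

The dominant parameter receives a PRCC near 1 while the inert one stays near 0, which is how the screening step ranks parameter influence.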
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification means quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all of the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-05-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore pressure were measured during the tests at multiple gravity levels. Based on the scaling law of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore pressure measurements were included along with cumulative outflow as input data. The centrifuge modeling technique, with its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions, shortens testing time and can provide significant information for the parameter estimation procedure.
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-01-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom in setting inverse problem conditions for parameter estimation.
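The comparison with prototype data in these two records relies on centrifuge scaling relations. The sketch below applies the standard relations for seepage-controlled flow (lengths scale by N, seepage times by N²) at the study's upper gravity level; the model-scale dimensions and duration are invented for illustration.

```python
# Map centrifuge-model measurements to prototype scale at N g.
N = 40                        # gravity level (the study tested up to 40 g)
model_length_m = 0.05         # model soil column height (assumed)
model_time_s = 600.0          # model-scale outflow duration (assumed)

prototype_length_m = model_length_m * N       # lengths scale by N
prototype_time_s = model_time_s * N ** 2      # seepage times scale by N^2
```

This time compression (a factor of N² = 1600 here) is what makes centrifuge testing attractive for shortening test durations.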
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
Estimation of Model Parameters for Steerable Needles.
Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.
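The moment-matching idea in these two needle records (compute an empirical covariance from repeated trials, then invert a closed-form covariance model to estimate noise parameters) can be shown with a hedged 1-D stand-in; the random-walk tip model and all values below are illustrative, not the paper's needle kinematics.

```python
import numpy as np

# Toy model: lateral tip deviation after N noisy steps is a sum of N
# Gaussian increments, so the closed-form tip variance is N * sigma^2.
# Estimate sigma by matching this to the empirical variance over trials.
rng = np.random.default_rng(5)
N_steps, sigma_true, n_trials = 100, 0.1, 5000
tips = rng.standard_normal((n_trials, N_steps)).sum(axis=1) * sigma_true

var_emp = tips.var()                     # empirical tip-position variance
sigma_hat = np.sqrt(var_emp / N_steps)   # invert Var = N * sigma^2
```

The same pattern generalizes to the full pose covariance: match each closed-form covariance entry to its empirical counterpart and solve for the noise parameters.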
An Optimization Model of Tunnel Support Parameters
Directory of Open Access Journals (Sweden)
Su Lijuan
2015-05-01
Full Text Available An optimization model was developed to obtain ideal values for the primary support parameters of tunnels, for which wide ranges are given in high-speed railway design codes when the surrounding rocks are at levels III, IV, and V. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, the level of the surrounding rock, burial depth of the tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis of the simulation results. Finally, the optimization model of the support parameters was obtained by solving the regression equation using the least squares method. In practical projects, optimized values of the support parameters can be obtained by entering known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
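The regression step of the workflow above can be sketched with a least squares fit on a small designed dataset; the design points, the response law, and the candidate design are invented for the sketch, not the paper's FLAC3D results.

```python
import numpy as np

# Fit a multiple linear regression of a response (e.g., crown settlement)
# on support parameters, then predict the response for a candidate design.
# Columns: anchor spacing (m), anchor length (m), shotcrete thickness (m)
X = np.array([[1.0, 3.0, 0.20],
              [1.2, 3.5, 0.25],
              [1.0, 4.0, 0.25],
              [1.5, 3.0, 0.30],
              [1.2, 4.0, 0.20],
              [1.5, 3.5, 0.25]])
# Synthetic responses generated from a known linear law: 10 + 4*s - 1*L - 20*t
y = 10 + 4 * X[:, 0] - 1 * X[:, 1] - 20 * X[:, 2]

A = np.column_stack([np.ones(len(X)), X])      # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least squares coefficients

# Predict the response for a candidate support design
candidate = np.array([1.0, 4.0, 0.30])
pred = beta[0] + beta[1:] @ candidate
```

With the fitted equation in hand, candidate parameter sets can be screened cheaply, which is the basis of the optimization step the abstract describes.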
Effects of model input data uncertainty in simulating water resources of a transnational catchment
Camargos, Carla; Breuer, Lutz
2016-04-01
Landscapes consist of different ecosystem components, and how these components affect water quantity and quality needs to be understood. We start from the assumption that water resources are generated in landscapes and that rural land use (particularly agriculture) has a strong impact on water resources that are used downstream for domestic and industrial supply. Partly located in the north of Luxembourg and partly in the southeast of Belgium, the Haute-Sûre catchment covers about 943 km2. As part of the catchment, the Haute-Sûre Lake is an important source of drinking water for the Luxembourg population, satisfying 30% of the city's demand. The objective of this study is to investigate the impact of spatial input data uncertainty on water resources simulations for the Haute-Sûre catchment. We apply the SWAT model for the period 2006 to 2012 and use a variety of digital information on soils, elevation and land use with various spatial resolutions. Several objective functions are evaluated, and we consider the resulting parameter uncertainty to quantify an important part of the global uncertainty in model simulations.
More Efficient Bayesian-based Optimization and Uncertainty Assessment of Hydrologic Model Parameters
2012-02-01
... is more objective, repeatable, and better capitalizes on the computational capacity of the modern computer) is an active area of research and ... existence of multiple local optima, non-smooth objective function surfaces, and long valleys in parameter space that are a result of excessive parameter ... outputs, structural aspects of the model, as well as its input dataset, model parameters that are adjustable through the calibration process, and the
Input-to-output transformation in a model of the rat hippocampal CA1 network
Olypher, Andrey V; Lytton, William W; Prinz, Astrid A.
2012-01-01
Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered the input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically an...
Regional Input Output Models and the FLQ Formula: A Case Study of Finland
Tony Flegg; Paul White
2008-01-01
This paper examines the use of location quotients (LQs) in constructing regional input-output models. Its focus is on the augmented FLQ formula (AFLQ) proposed by Flegg and Webber (2000), which takes regional specialization explicitly into account. In our case study, we examine data for 20 Finnish regions, ranging in size from very small to very large, in order to assess the relative performance of the AFLQ formula in estimating regional imports, total intermediate inputs and output multiplier...
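The LQ machinery the paper builds on can be sketched with the standard FLQ formula (not the augmented AFLQ variant the paper studies); the employment figures, δ value, and coefficient matrix below are invented for illustration.

```python
import numpy as np

# FLQ regionalization: FLQ_ij = CILQ_ij * lambda*, truncated at 1, with
# lambda* = (log2(1 + TRE/TNE))**delta, TRE/TNE = regional/national employment.
regional_emp = np.array([30.0, 50.0, 20.0])      # regional employment by sector
national_emp = np.array([400.0, 300.0, 300.0])   # national employment by sector
delta = 0.3                                      # regional-size sensitivity

slq = (regional_emp / regional_emp.sum()) / (national_emp / national_emp.sum())
cilq = slq[:, None] / slq[None, :]               # selling sector i, buying sector j
lam = np.log2(1.0 + regional_emp.sum() / national_emp.sum()) ** delta
flq = np.minimum(cilq * lam, 1.0)                # cap coefficients at 1

# Scale a national technical-coefficient matrix down to the region
A_national = np.array([[0.10, 0.20, 0.05],
                       [0.15, 0.05, 0.10],
                       [0.05, 0.10, 0.15]])
A_regional = A_national * flq
```

Because λ* < 1 for regions smaller than the nation, the FLQ systematically scales coefficients down, reflecting the larger import propensity of small regions.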
Interregional spillovers in Spain: an estimation using an interregional input-output model
Llano, Carlos
2009-01-01
In this note we introduce the 1995 Spanish Interregional Input-Output Model, which was estimated using a wide set of one-region input-output tables and interregional trade matrices estimated for each sector from interregional transport flows. Based on this framework, and by means of the Hypothetical Regional Extraction Method, the interregional backward and feedback effects are computed, capturing the pull effect of every region on the rest of Spain through their sectoral relations withi...
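The hypothetical extraction method can be sketched on a toy three-sector Leontief model (invented coefficients and final demands, not the Spanish tables): extract a sector by zeroing its row and column in the coefficient matrix, recompute total output, and read the pull effect from the drop in the remaining sectors.

```python
import numpy as np

# Leontief model x = A x + f; extract sector 0 and measure the linkage effect.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.20],
              [0.10, 0.05, 0.15]])
f = np.array([100.0, 150.0, 80.0])               # final demand

x_full = np.linalg.solve(np.eye(3) - A, f)       # baseline total output

A_ext = A.copy()
A_ext[0, :] = 0.0                                # sector 0 buys no intermediate inputs
A_ext[:, 0] = 0.0                                # and sells none to the others
x_ext = np.linalg.solve(np.eye(3) - A_ext, f)

# Pull (linkage) effect: output lost in the remaining sectors
linkage = x_full[1:] - x_ext[1:]
```

With its intermediate links severed, the extracted sector produces only its own final demand, and the positive `linkage` values quantify how much activity it pulled along in the rest of the economy.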
Analysis of Modeling Parameters on Threaded Screws.
Energy Technology Data Exchange (ETDEWEB)
Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.
Comparison of Soft Computing Techniques for Modelling of the EDM Performance Parameters
Directory of Open Access Journals (Sweden)
M. V. Cakir
2013-01-01
Full Text Available Selection of appropriate operating conditions is an important consideration in the electrical discharge machining (EDM) of steel parts. The performance of the EDM process is affected by many input parameters; therefore, the computational relations between the output responses and the controllable input parameters must be known. However, the proper selection of these parameters is a complex task, and it is generally made with the help of sophisticated numerical models. This study investigates the capacity of the Adaptive Neuro-Fuzzy Inference System (ANFIS), genetic expression programming (GEP) and artificial neural networks (ANN) to predict EDM performance parameters. The datasets used in the modelling study were taken from an experimental study. According to the results, the statistical performance of all the compared models is sufficient, but the ANFIS model is observed to be slightly better than the other models.
Bates, P. D.; Neal, J. C.; Fewtrell, T. J.
2012-12-01
In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
Directory of Open Access Journals (Sweden)
Majid Namdari
2011-05-01
This study examines the energy consumption of inputs and outputs used in mandarin production and the relationship between energy inputs and yield in Mazandaran, Iran. The Marginal Physical Product (MPP) method was used to analyze the sensitivity of energy inputs on mandarin yield, and the returns to scale of the econometric model were calculated. For this purpose, data were collected from 110 mandarin orchards selected by random sampling. The results indicated that total energy input was 77501.17 MJ/ha. The energy use efficiency, energy productivity and net energy of mandarin production were found to be 0.77, 0.41 kg/MJ and -17651.17 MJ/ha, respectively. About 41% of the total energy input used in mandarin production was indirect, while about 59% was direct. Econometric estimation results revealed that the impact of human labor energy (0.37) was the highest among the inputs in mandarin production. The results also showed that the direct, indirect, renewable and non-renewable energy forms had a positive and statistically significant impact on output level. Sensitivity analysis of the energy inputs showed that an additional 1 MJ of human labor, farmyard manure or chemical fertilizer energy would increase yield by 2.05, 1.80 and 1.26 kg, respectively. The results also showed that the MPP values of direct and renewable energy were higher.
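The MPP figures cited in the abstract follow from Cobb-Douglas elasticities; a minimal sketch of that calculation, with made-up mean values (only the human-labor elasticity 0.37 echoes the abstract):

```python
# Sketch of the Marginal Physical Product (MPP) calculation used in
# energy-input studies: MPP_j = alpha_j * (mean yield / mean input_j),
# where alpha_j is the Cobb-Douglas elasticity of input j.
# All means below are illustrative, not the paper's data.

mean_yield = 28000.0                 # kg/ha (hypothetical)
inputs = {                           # mean energy input (MJ/ha), elasticity
    "human_labor":     (1500.0, 0.37),
    "farmyard_manure": (9000.0, 0.20),
}

def mpp(mean_input, elasticity, mean_output=mean_yield):
    """Marginal physical product: extra kg of yield per extra MJ of input."""
    return elasticity * mean_output / mean_input

for name, (x_mean, alpha) in inputs.items():
    print(name, round(mpp(x_mean, alpha), 2))
```

Inputs with small mean use but large elasticity (here, human labor) get the largest MPP, which is the pattern the abstract reports.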
Yi, S; Oemler, A E; Yi, Sukyoung; Demarque, Pierre; Oemler, Augustus
1997-01-01
We present models of the late stages of stellar evolution intended to explain the UV upturn phenomenon in elliptical galaxies. Such models are sensitive to values of a number of poorly-constrained physical parameters, including metallicity, age, stellar mass loss, helium enrichment, and the distribution of stars on the zero age horizontal branch (HB). We explore the sensitivity of the results to values of these parameters, and reach the following conclusions. Old, metal rich galaxies, such as giant ellipticals, naturally develop a UV upturn within a reasonable time scale - less than a Hubble time - without the presence of young stars. The most likely stars to dominate the UV flux of such populations are low mass, core helium burning (HB and evolved HB) stars. Metal-poor populations produce a higher ratio of UV-to-V flux, due to opacity effects, but only metal-rich stars develop a UV upturn, in which the flux increases towards shorter UV wavelengths. Model color-magnitude diagrams and corresponding integrated ...
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
Energy Technology Data Exchange (ETDEWEB)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
2016-08-01
Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
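The proper orthogonal decomposition step can be sketched in a few lines; the snapshot data below are synthetic stand-ins for the LES flow fields used in the paper:

```python
import numpy as np

# Minimal sketch of proper orthogonal decomposition (POD) on a snapshot
# matrix: columns are flow-field snapshots in time. The "flow" here is
# synthetic low-rank data, not an LES result.
rng = np.random.default_rng(0)
n_space, n_time, r = 200, 50, 5

# Synthetic snapshots: a few smooth spatial modes with random time dynamics.
x = np.linspace(0, 1, n_space)
modes_true = np.stack([np.sin((k + 1) * np.pi * x) for k in range(r)], axis=1)
snapshots = modes_true @ rng.standard_normal((r, n_time))

# POD modes are the left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
pod_modes = U[:, :r]

# Reduced-order coordinates and reconstruction error.
a = pod_modes.T @ snapshots              # r x n_time reduced states
recon = pod_modes @ a
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```

The reduced coordinates `a` would then feed a system identification step to obtain the input-output model; that step is omitted here.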
Multi input single output model predictive control of non-linear bio-polymerization process
Energy Technology Data Exchange (ETDEWEB)
Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)
2015-05-15
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M{sub n}) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™ and used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that MPC is able to track the reference trajectory and give optimum movement of the manipulated variable.
The Lund Model at Nonzero Impact Parameter
Janik, R A; Janik, Romuald A.; Peschanski, Robi
2003-01-01
We extend the formulation of the longitudinal 1+1 dimensional Lund model to nonzero impact parameter using the minimal area assumption. Complete formulae for the string breaking probability and the momenta of the produced mesons are derived using the string worldsheet Minkowskian helicoid geometry. For strings stretched into the transverse dimension, we find a probability distribution with slope linear in m_T, similar to that of statistical models but without any thermalization assumptions.
IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL
Institute of Scientific and Technical Information of China (English)
Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao
2004-01-01
The traditional lumped parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Furthermore, two suggestions are put forward to remove these drawbacks. Firstly, the structure of the equivalent circuit is modified; then the evaluation of the equivalent fluid resistance is changed to take the frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.
Input-to-output transformation in a model of the rat hippocampal CA1 network.
Olypher, Andrey V; Lytton, William W; Prinz, Astrid A
2012-01-01
Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically and biophysically detailed neuronal models, each with more than 1000 compartments and thousands of synaptic inputs. Inputs came from binary patterns of spiking neurons from field CA3 and entorhinal cortex (EC). On average, each presynaptic pattern initiated action potentials in the same number of CA1 principal cells in the patch. We considered pairs of similar and pairs of distinct patterns. In all the cases CA1 strongly separated input patterns. However, CA1 cells were considerably more sensitive to small alterations in EC patterns compared to CA3 patterns. Our results can be used for comparison of input-to-output transformations in normal and pathological hippocampal networks.
Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model
Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong
2016-01-01
In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
Kelsey, Kathleen Dodge; Pense, Seburn L.
2001-01-01
A model for collecting and using stakeholder input on research priorities is a modification of Guba and Lincoln's model, involving preevaluation preparation, stakeholder identification, information gathering and analysis, interpretive filtering, and negotiation and consensus. A case study at Oklahoma State University illustrates its applicability…
Improving the Performance of Water Demand Forecasting Models by Using Weather Input
Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.
2014-01-01
Literature shows that water demand forecasting models which use water demand as single input, are capable of generating a fairly accurate forecast. However, at changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an Adaptiv
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional
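A minimal sketch of the maximum-likelihood calibration step for one of the four variables, assuming a Gumbel distribution for the annual maximum significant wave height and using simulated rather than North Sea data:

```python
import numpy as np

# Sketch: maximum-likelihood fit of a Gumbel distribution to synthetic
# "annual maximum significant wave height" data, one building block of a
# stochastic metocean model. All values are hypothetical.
rng = np.random.default_rng(1)
true_loc, true_scale = 6.0, 0.8                    # metres (hypothetical)
u = rng.uniform(size=500)
hs = true_loc - true_scale * np.log(-np.log(u))    # Gumbel(max) samples

def gumbel_mle(x, n_iter=200):
    """Solve the Gumbel MLE equations by fixed-point iteration."""
    beta = x.std() * np.sqrt(6) / np.pi            # moment-based start
    for _ in range(n_iter):
        w = np.exp(-x / beta)
        beta = x.mean() - (x * w).sum() / w.sum()
    w = np.exp(-x / beta)
    mu = -beta * np.log(w.mean())
    return mu, beta

mu_hat, beta_hat = gumbel_mle(hs)
# 100-year return value from the fitted distribution.
hs_100 = mu_hat - beta_hat * np.log(-np.log(1 - 1 / 100))
print(f"location {mu_hat:.2f} m, scale {beta_hat:.2f} m, 100-yr Hs {hs_100:.2f} m")
```

The full model in the paper adds dependency between the four variables and between directional sectors; this sketch covers only the marginal ML fit.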
Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR
Energy Technology Data Exchange (ETDEWEB)
Nilsson, Lars [Lentek, Nykoeping (Sweden)
2006-05-15
An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and checks of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state at the operating power of 3300 MW{sub th}. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratory under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to the STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report yet another code version, MELCOR 2.0, was announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1
GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA
Energy Technology Data Exchange (ETDEWEB)
Collin, Blaise Paul [Idaho National Laboratory
2016-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read
A seismic free field input model for FE-SBFE coupling in time domain
Institute of Scientific and Technical Information of China (English)
阎俊义; 金峰; 徐艳杰; 王光纶; 张楚汉
2003-01-01
A seismic free field input formulation of the coupling procedure of the finite element (FE) and the scaled boundary finite element (SBFE) is proposed to perform the unbounded soil-structure interaction analysis in time domain. Based on the substructure technique, seismic excitation of the soil-structure system is represented by the free-field motion of an elastic half-space. To reduce the computational effort, the acceleration unit-impulse response function of the unbounded soil is decomposed into two functions: linear and residual. The latter converges to zero and can be truncated as required. With the prescribed tolerance parameter, the balance between accuracy and efficiency of the procedure can be controlled. The validity of the model is verified by the scattering analysis of a hemi-spherical canyon subjected to plane harmonic P, SV and SH wave incidence. Numerical results show that the new procedure is very efficient for seismic problems within a normal range of frequency. The coupling procedure presented herein can be applied to linear and nonlinear earthquake response analysis of practical structures which are built on unbounded soil.
Order Parameters of the Dilute A Models
Warnaar, S O; Seaton, K A; Nienhuis, B
1993-01-01
The free energy and local height probabilities of the dilute A models with broken $\mathbb{Z}_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\delta=15$ directly, without the use of scaling relations.
Estimation of Soil Carbon Input in France: An Inverse Modelling Approach
Institute of Scientific and Technical Information of China (English)
J.MEERSMANS; M.P.MARTIN; E.LACARCE; T.G.ORTON; S.DE BAETS; M.GOURRAT; N.P.A.SABY
2013-01-01
Development of a quantitative understanding of soil organic carbon (SOC) dynamics is vital for management of soil to sequester carbon (C) and maintain fertility, thereby contributing to food security and climate change mitigation. There are well-established process-based models that can be used to simulate SOC stock evolution; however, there are few plant residue C input values and those that exist represent a limited range of environments. This limitation in a fundamental model component (i.e., C input) constrains the reliability of current SOC stock simulations. This study aimed to estimate crop-specific and environment-specific plant-derived soil C input values for agricultural sites in France based on data from 700 sites selected from a recently established French soil monitoring network (the RMQS database). Measured SOC stock values from this large-scale soil database were used to constrain an inverse RothC modelling approach to derive estimated C input values consistent with the stocks. This approach allowed us to estimate significant crop-specific C input values (P < 0.05) for 14 out of 17 crop types, in the range from 1.84 ± 0.69 t C ha-1 year-1 (silage corn) to 5.15 ± 0.12 t C ha-1 year-1 (grassland/pasture). Furthermore, the incorporation of climate variables improved the predictions: the C input of 4 crop types could be predicted as a function of temperature and that of 8 as a function of precipitation. This study offers an approach to meet the urgent need for crop-specific and environment-specific C input values in order to improve the reliability of SOC stock prediction.
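The inverse-modelling idea can be illustrated with a one-pool toy model standing in for RothC (which has several pools): choose the annual C input that makes the simulated stock match the measured one. All rates and stocks below are hypothetical:

```python
# Minimal sketch of the inverse approach: find the annual carbon input
# for which a simple one-pool SOC model reproduces a measured stock.
# The decay rate, initial stock and target are illustrative only.

def soc_after(years, c_input, k=0.03, soc0=30.0):
    """Run a one-pool SOC model: dSOC/dt = input - k*SOC (annual steps)."""
    soc = soc0
    for _ in range(years):
        soc += c_input - k * soc
    return soc

def invert_c_input(target_soc, years=100, lo=0.0, hi=20.0):
    """Bisection: find the C input (t C/ha/yr) matching the measured stock.

    Works because the simulated stock is monotone increasing in the input.
    """
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if soc_after(years, mid) < target_soc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

measured_stock = 55.0        # t C/ha (hypothetical RMQS-like measurement)
c_in = invert_c_input(measured_stock)
print(f"estimated C input: {c_in:.2f} t C/ha/yr")
```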
State-shared model for multiple-input multiple-output systems
Institute of Scientific and Technical Information of China (English)
Zhenhua TIAN; Karlene A. HOO
2005-01-01
This work proposes a method to construct a state-shared model for multiple-input multiple-output (MIMO) systems. A state-shared model is defined as a linear time invariant state-space structure that is driven by measurement signals (the plant outputs and the manipulated variables) but shared by different multiple input/output models. The genesis of the state-shared model is based on a particular reduced non-minimal realization. Any such realization necessarily fulfills the requirement that the output of the state-shared model is an asymptotically correct estimate of the output of the plant, if the process model is selected appropriately. The approach is demonstrated on a nonlinear MIMO system: a physiological model of calcium fluxes that controls muscle contraction and relaxation in human cardiac myocytes.
Testing Linear Models for Ability Parameters in Item Response Models
Glas, Cees A.W.; Hendrawan, Irene
2005-01-01
Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum like
Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling
Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc
2010-05-01
Snow extent and relative physical properties are key parameters in hydrology, weather forecasting and hazard warning as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectrometer (MODIS) from NASA is especially useful for this purpose due to its high frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated in hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. By using these maps, daily time series covering the winter seasons (November - May) from 2002 until 2008/2009 have been created. Along with snow maps from MODIS images, some snow cover maps derived from LANDSAT images have also been used. Due to their high resolution (
References: [2] … snow cover in alpine areas with multi-temporal MODIS data and hydrological models (in Italian), 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R. 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.
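The Terra/Aqua combination step described above amounts to a per-pixel fallback: where the morning (Terra) map is cloudy, use the afternoon (Aqua) map. A minimal sketch with illustrative class codes (the actual MODIS product coding may differ):

```python
import numpy as np

# Sketch of combining two same-day snow maps to reduce cloud gaps.
# Codes are illustrative: 1 = snow, 0 = snow-free, 255 = cloud/no-data.
CLOUD = 255

terra = np.array([[1, CLOUD, 0],
                  [CLOUD, CLOUD, 1]], dtype=np.uint8)
aqua = np.array([[1, 0, CLOUD],
                 [1, CLOUD, 1]], dtype=np.uint8)

# Keep the Terra value where it is valid, otherwise fall back on Aqua.
combined = np.where(terra != CLOUD, terra, aqua)
print(combined)
```

Only pixels cloudy in both overpasses remain unclassified, which is why the combined product has far fewer no-data values.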
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
Directory of Open Access Journals (Sweden)
S. Pande
2015-04-01
This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
Recurrent network models for perfect temporal integration of fluctuating correlated inputs.
Directory of Open Access Journals (Sweden)
Hiroshi Okamoto
2009-06-01
Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put on a cascade form allowing to estimate the state components and the set of unknown parameters separately. Inspired by the nonlinear Balloon hemodynamic model for functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
Modelling spin Hamiltonian parameters of molecular nanomagnets.
Gupta, Tulika; Rajaraman, Gopalan
2016-07-12
Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J includes various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} class of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new generation MNMs.
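For orientation, the SH parameters listed above schematically enter a spin Hamiltonian of the standard textbook form, shown here for a pair of coupled spins in a field (the double-exchange term B is omitted; this is a generic expression, not one specific to any system in the article):

```latex
\hat{H} = -2J\,\hat{\mathbf{S}}_1\cdot\hat{\mathbf{S}}_2
        + \sum_{\alpha=x,y,z} J_\alpha\,\hat{S}_{1\alpha}\hat{S}_{2\alpha}
        + D\left[\hat{S}_z^{2} - \tfrac{1}{3}S(S+1)\right]
        + E\left(\hat{S}_x^{2} - \hat{S}_y^{2}\right)
        + \mu_B\,\mathbf{B}\cdot\mathbf{g}\cdot\hat{\mathbf{S}}
```

Here the isotropic term carries J, the anisotropic sum carries (Jx, Jy, Jz), D and E are the axial and rhombic zero-field splitting parameters, and g is the g-tensor.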
Queueing model for an ATM multiplexer with unequal input/output link capacities
Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.
1998-10-01
We present a queueing model for an ATM multiplexer with unequal input/output link capacities. This model can be used to analyze the buffer behavior of an ATM multiplexer which multiplexes low-speed input links into a high-speed output link. For this queueing model, we assume that the input and output slot times are not equal; this is quite different from most analyses of discrete-time queues for ATM multiplexers/switches. In the queueing analysis, we adopt a correlated arrival process represented by the Discrete-time Batch Markovian Arrival Process. The analysis is based upon the M/G/1-type queue technique, which enables easy numerical computation. Queue length distributions observed at different epochs, and the queue length distribution seen by an arbitrary arriving cell when it enters the buffer, are given.
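The unequal slot times can be mimicked in a toy simulation: one input slot spans several output slots, so the buffer drains several cells per input slot. Bernoulli arrivals below stand in for the correlated D-BMAP of the paper, and all rates are hypothetical:

```python
import random

# Toy simulation of the multiplexer buffer: several slow input links feed
# one fast output link. One input slot equals `speedup` output slots, so
# up to `speedup` cells leave the buffer per input slot.
random.seed(42)
n_inputs, p_arrival, speedup = 8, 0.3, 4
queue, samples = 0, []

for _ in range(100_000):
    # One cell arrives on each input link with probability p_arrival.
    arrivals = sum(random.random() < p_arrival for _ in range(n_inputs))
    queue += arrivals
    queue = max(0, queue - speedup)     # output drains `speedup` cells/slot
    samples.append(queue)

load = n_inputs * p_arrival / speedup   # offered load on the output link
print(f"load {load:.2f}, mean queue length {sum(samples) / len(samples):.2f}")
```

With the offered load below one the queue stays short on average; the analytical M/G/1-type approach of the paper yields the full queue length distribution rather than simulated moments.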
Nonlinear model predictive control using parameter varying BP-ARX combination model
Yang, J.-F.; Xiao, L.-F.; Qian, J.-X.; Li, H.
2012-03-01
A novel back-propagation AutoRegressive with eXternal input (BP-ARX) combination model is constructed for model predictive control (MPC) of MIMO nonlinear systems, whose steady-state relation between inputs and outputs can be obtained. The BP neural network represents the steady-state relation, and the ARX model represents the linear dynamic relation between inputs and outputs of the nonlinear systems. The BP-ARX model is a global model and is identified offline, while the parameters of the ARX model are rescaled online according to BP neural network and operating data. Sequential quadratic programming is employed to solve the quadratic objective function online, and a shift coefficient is defined to constrain the effect time of the recursive least-squares algorithm. Thus, a parameter varying nonlinear MPC (PVNMPC) algorithm that responds quickly to large changes in system set-points and shows good dynamic performance when system outputs approach set-points is proposed. Simulation results in a multivariable stirred tank and a multivariable pH neutralisation process illustrate the applicability of the proposed method and comparisons of the control effect between PVNMPC and multivariable recursive generalised predictive controller are also performed.
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Li, Zhen; Karniadakis, George
2016-01-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are sparse. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples especially for systems with high dimensional parametric space....
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been operated by the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model takes a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the combination of parameters that gives the best performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and the application of the results to the IPS-driven ENLIL model was discussed.
Modelling tourists arrival using time varying parameter
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists who visited Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Because tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
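A TVP regression of this kind can be sketched as a random-walk coefficient tracked by a scalar Kalman filter. The synthetic series and noise variances below are assumptions for illustration, not the paper's tourism data:

```python
import numpy as np

# Sketch of a time-varying parameter (TVP) regression estimated by the
# Kalman filter: y_t = x_t * beta_t + eps_t, with beta_t = beta_{t-1} + eta_t
# (a random walk). A synthetic single-regressor series stands in for the
# multi-predictor tourism model.

def kalman_tvp(y, x, q=1e-3, r=0.01):
    beta, P = 0.0, 1.0                 # prior mean and variance of beta_0
    path = []
    for t in range(len(y)):
        P = P + q                      # predict: random-walk state variance grows
        S = x[t] * P * x[t] + r        # innovation variance
        K = P * x[t] / S               # Kalman gain
        beta = beta + K * (y[t] - x[t] * beta)   # update with new observation
        P = (1.0 - K * x[t]) * P
        path.append(beta)
    return np.array(path)

rng = np.random.default_rng(2)
T = 400
x = rng.standard_normal(T)
beta_true = np.linspace(0.5, 1.5, T)   # slowly drifting coefficient
y = beta_true * x + 0.1 * rng.standard_normal(T)
beta_hat = kalman_tvp(y, x)
```

The filtered path tracks the drifting coefficient, which is what makes the TVP formulation attractive for series whose sensitivity to predictors changes over time.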
Green Input-Output Model for Power Company Theoretical & Application Analysis
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
Based on the theory of marginal opportunity cost, a green input-output table and models for a power company are put forward in this paper. For application purposes, analyses of integrated planning, cost analysis, and pricing for the power company are also given.
The economic impact of multifunctional agriculture in Dutch regions: An input-output model
Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.
2013-01-01
Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture
The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model
Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.
2012-01-01
Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional
Characteristic operator functions for quantum input-plant-output models and coherent control
Gough, John E.
2015-01-01
We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate its relevance to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.
Using a Joint-Input, Multi-Product Formulation to Improve Spatial Price Equilibrium Models
Bishop, Phillip M.; Pratt, James E.; Novakovic, Andrew M.
1994-01-01
Mathematical programming models, as typically formulated for international trade applications, may contain certain implied restrictions which lead to solutions which can be shown to be technically infeasible, or if feasible, then not actually an equilibrium. An alternative formulation is presented which allows joint-inputs and multi-products, with pure transshipment and product substitution forms of arbitrage.
Input parameters and scenarios, including economic inputs
DEFF Research Database (Denmark)
Boklund, Anette; Hisham Beshara Halasa, Tariq
2012-01-01
… or to the abattoir, was calculated as the sum of all registered movements off the herd in the period from October 1, 2006 to September 30, 2007 divided by 365. Swine movements originated from the Movement database for swine, and cattle and sheep movements from the Danish Cattle database. From an infected herd … place, a receiving herd needed to be found. The distance in which the receiving herd should be found was calculated from movement data for animals, and from data from trucks and abattoirs for movements to slaughter and milk tankers. For persons visiting herds, we used a combination of expert opinions … the zone, and second 21 days later. Sheep within the zone were simulated to be tested. Within the surveillance zone, all herds were simulated to be clinically surveyed within 7 days, and sheep within the zone were simulated to be tested within 7 days and again before lifting the zone. Herds which had …
A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input
Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.
2016-10-01
Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains elusive which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have low risk of motor overflow if their sensory inputs remain statistically independent. But when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent-plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
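The core claim above, that correlated spike timing alone can drive crosstalk synapses to grow under STDP, can be caricatured with a pair-based STDP rule acting on a single crosstalk weight. This is a deliberately minimal abstraction of the paper's neuromorphic spiking-network emulation; all constants and the pairing scheme are illustrative assumptions:

```python
import numpy as np

# Toy pair-based STDP sketch: a synapse potentiates when the presynaptic
# spike precedes the postsynaptic spike (dt > 0) and depresses otherwise.
# Correlated input makes pre->post pairings systematic, so the crosstalk
# weight grows toward its cap; heavily jittered timing has no systematic
# causal ordering. All constants are illustrative.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)      # potentiation branch
    return -a_minus * np.exp(dt / tau)         # depression branch

def run(n_pairs, lag, jitter, rng):
    w = 0.5                                    # initial crosstalk weight
    for _ in range(n_pairs):
        dt = lag + jitter * rng.standard_normal()
        w = float(np.clip(w + stdp_dw(dt), 0.0, 1.0))  # hard weight bounds
    return w

rng = np.random.default_rng(3)
w_correlated = run(1000, lag=5.0, jitter=0.0, rng=rng)    # fixed causal timing
w_uncorrelated = run(1000, lag=0.0, jitter=40.0, rng=rng) # effectively random timing
```

Under fixed causal timing the weight saturates at its upper bound, mirroring the irreversible overflow described above, while random timing produces no systematic growth.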
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.
Miyazaki, Shohei; Yamazaki, Youichi; Murase, Kenya
2008-11-01
We performed an error analysis of the quantification of liver perfusion from dynamic contrast-enhanced computed tomography (DCE-CT) data using a dual-input single-compartment model for various disease severities, based on computer simulations. In the simulations, the time-density curves (TDCs) in the liver were generated from an actually measured arterial input function using a theoretical equation describing the kinetic behavior of the contrast agent (CA) in the liver. The rate constants for the transfer of CA from the hepatic artery to the liver (K1a), from the portal vein to the liver (K1p), and from the liver to the plasma (k2) were estimated from simulated TDCs with various plasma volumes (V0s). To investigate the effect of the shapes of input functions, the original arterial and portal-venous input functions were stretched in the time direction by factors of 2, 3 and 4 (stretching factors). The above parameters were estimated with the linear least-squares (LLSQ) and nonlinear least-squares (NLSQ) methods, and the root mean square errors (RMSEs) between the true and estimated values were calculated. Sensitivity and identifiability analyses were also performed. The RMSE of V0 was the smallest, followed by those of K1a, k2 and K1p in an increasing order. The RMSEs of K1a, K1p and k2 increased with increasing V0, while that of V0 tended to decrease. The stretching factor also affected parameter estimation in both methods. The LLSQ method estimated the above parameters faster and with smaller variations than the NLSQ method. Sensitivity analysis showed that the magnitude of the sensitivity function of V0 was the greatest, followed by those of K1a, K1p and k2 in a decreasing order, while the variance of V0 obtained from the covariance matrices was the smallest, followed by those of K1a, K1p and k2 in an increasing order. The magnitude of the sensitivity function and the variance increased and decreased, respectively, with increasing disease severity and decreased
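The LLSQ method above exploits the fact that integrating the kinetic equation dC/dt = K1a*Ca + K1p*Cp - k2*C turns parameter estimation into a linear regression on cumulative integrals. A sketch on synthetic curves, with the plasma-volume term V0 and the stretching-factor analysis omitted for brevity, and with invented input functions:

```python
import numpy as np

# Sketch of linear least-squares (LLSQ) estimation for a dual-input
# single-compartment liver model, dC/dt = K1a*Ca + K1p*Cp - k2*C
# (the plasma-volume term V0 is omitted here). Integrating both sides gives
#   C(t) = K1a*int(Ca) + K1p*int(Cp) - k2*int(C),
# which is linear in the three rate constants.

dt = 0.1
t = np.arange(0, 60, dt)
Ca = t * np.exp(-t / 4.0)             # synthetic arterial input function
Cp = t * np.exp(-t / 6.0)             # synthetic portal-venous input function
K1a, K1p, k2 = 0.2, 0.5, 0.1          # "true" rate constants (illustrative)

# Forward Euler simulation of the tissue time-density curve
C = np.zeros_like(t)
for i in range(1, len(t)):
    dC = K1a * Ca[i-1] + K1p * Cp[i-1] - k2 * C[i-1]
    C[i] = C[i-1] + dt * dC

def cum(f):
    # Left-rectangle cumulative integral, aligned with the Euler scheme
    # so the discrete integral relation holds exactly.
    return np.concatenate(([0.0], np.cumsum(f)[:-1])) * dt

X = np.column_stack([cum(Ca), cum(Cp), -cum(C)])
K1a_hat, K1p_hat, k2_hat = np.linalg.lstsq(X, C, rcond=None)[0]
```

On noise-free synthetic data the regression recovers the rate constants essentially exactly; the paper's error analysis concerns how noise, V0, and stretched input functions degrade these estimates.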
Resonance model for non-perturbative inputs to gluon distributions in the hadrons
Ermolaev, B I; Troyan, S I
2015-01-01
We construct non-perturbative inputs for the elastic gluon-hadron scattering amplitudes in the forward kinematic region for both polarized and non-polarized hadrons. We use the optical theorem to relate the invariant scattering amplitudes to the gluon distributions in the hadrons. By analyzing the structure of the UV and IR divergences, we can determine theoretical conditions on the non-perturbative inputs, and use these to construct the results in a generalized Basic Factorization framework using a simple Resonance Model. These results can then be related to the K_T and Collinear Factorization expressions, and the corresponding constraints can be extracted.
Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila
2016-10-01
In the paper entitled "Development of ANFIS model for air quality forecasting and input optimization for reducing the computational cost and time," the correlation coefficient values of O3 with the other parameters (shown in Table 4) were mistakenly taken from other results. The analyses, however, were done based on the actual results. The actual values are listed in the revised Table 4.
Energy Technology Data Exchange (ETDEWEB)
Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)
2014-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first
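The single-phase versus multiphase comparison described above can be sketched with the closed-form cumulative FOD expression G(t) = L0*M*(1 - exp(-k*t)). The waste fractions and parameter values below are illustrative assumptions, not regulatory defaults:

```python
import numpy as np

# Sketch of first-order decay (FOD) methane generation. Cumulative methane
# from a mass M landfilled at t = 0 is G(t) = L0*M*(1 - exp(-k*t)); a
# multiphase model sums this over waste fractions, each with its own L0
# and k. All numbers are illustrative.

def cumulative_methane(M, L0, k, t):
    """Cumulative methane (m^3) generated by time t (years)."""
    return L0 * M * (1.0 - np.exp(-k * t))

# Two-phase waste stream: fast-decaying food waste, slow-decaying paper.
masses = np.array([600.0, 400.0])      # Mg landfilled
L0s = np.array([60.0, 120.0])          # m^3 CH4 per Mg
ks = np.array([0.3, 0.04])             # 1/yr

t = 50.0
multiphase = sum(cumulative_methane(m, L0, k, t)
                 for m, L0, k in zip(masses, L0s, ks))

# Single-phase surrogate with mass-weighted average parameters
w = masses / masses.sum()
L0_avg, k_avg = w @ L0s, w @ ks
singlephase = cumulative_methane(masses.sum(), L0_avg, k_avg, t)
```

Both formulations converge to the same total methane potential (the sum of L0*M over fractions) as t grows, so the disagreement between them is confined to the timing of generation; here the 50-year cumulative totals differ by under 15 percent.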
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
Input-constrained model predictive control via the alternating direction method of multipliers
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.
2014-01-01
This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation …
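The alternating structure of such an algorithm can be illustrated on a generic box-constrained QP: one update solves a linear system (the role played by the Riccati-based LQCP solver in the paper), the other projects onto the input limits, and a dual update couples them. A minimal scaled-form ADMM sketch, with invented problem data:

```python
import numpy as np

# Sketch of ADMM for an input-constrained QP:
#   min 0.5*x'Qx + q'x   subject to   l <= x <= u.
# The x-update solves a linear system; the z-update is a cheap projection
# onto the box; the scaled dual variable y ties the two together.

def admm_box_qp(Q, q, l, u, rho=1.0, n_iter=300):
    n = len(q)
    x = np.zeros(n); z = np.zeros(n); y = np.zeros(n)   # y: scaled dual
    M = Q + rho * np.eye(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, -q + rho * (z - y))      # x-minimization step
        z = np.clip(x + y, l, u)                        # projection onto the box
        y = y + x - z                                   # dual ascent step
    return z                                            # z is always feasible

Q = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-8.0, -6.0])
l = np.array([0.0, 0.0])
u = np.array([1.0, 1.0])
x_opt = admm_box_qp(Q, q, l, u)
```

For this data the unconstrained minimizer violates both upper limits, so the constrained optimum sits at the bound [1, 1], which the iteration recovers.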
Large uncertainty in soil carbon modelling related to carbon input calculation method
Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Oleson, Jørgen E.
2016-04-01
A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. This DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems, that are applied to identical crop rotations. Average calculated soil C inputs vary largely between allometric equations and range from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is due to the higher yields in the latter treatment, i.e. the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References Taghizadeh-Toosi A et al. (2014) C
DEFF Research Database (Denmark)
Rasmussen, Bjarne D.; Jakobsen, Arne
1999-01-01
Mathematical models of refrigeration systems are often based on a coupling of component models forming a "closed loop" type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.
Estimation of sectoral prices in the BNL energy input--output model
Energy Technology Data Exchange (ETDEWEB)
Tessmer, R.G. Jr.; Groncki, P.; Boyce, G.W. Jr.
1977-12-01
Value-added coefficients have been incorporated into Brookhaven's Energy Input-Output Model so that one can calculate the implicit price at which each sector sells its output to interindustry and final-demand purchasers. Certain adjustments to historical 1967 data are required because of the unique structure of the model. Procedures are also described for projecting energy-sector coefficients in future years that are consistent with exogenously specified energy prices.
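The implicit sectoral prices follow the standard Leontief price relation p = A^T p + v, where A holds the technical coefficients and v the value added per unit of output. A small illustrative example (the 3-sector numbers are invented, not Brookhaven data):

```python
import numpy as np

# Sketch of implicit sectoral prices from an input-output model with
# value-added coefficients: p = A^T p + v, i.e. p = (I - A^T)^{-1} v,
# where A is the technical-coefficient matrix and v is value added per
# unit of output. The 3-sector numbers are illustrative only.

A = np.array([[0.1, 0.2, 0.1],
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.2]])   # inter-industry input coefficients
v = np.array([0.5, 0.4, 0.5])    # value added per unit of output

p = np.linalg.solve(np.eye(3) - A.T, v)   # implicit sectoral prices
```

Because each column's intermediate-input cost plus value added sums to one here, the solved prices come out exactly 1 in these units; the consistency check is that p reproduces itself through p = A^T p + v.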
Global Behaviors of a Chemostat Model with Delayed Nutrient Recycling and Periodically Pulsed Input
Directory of Open Access Journals (Sweden)
Kai Wang
2010-01-01
The dynamic behaviors of a chemostat model with delayed nutrient recycling and periodically pulsed input are studied. By introducing a new analysis technique, sufficient and necessary conditions for the permanence and extinction of the microorganisms are obtained. Furthermore, by using the Liapunov function method, a sufficient condition for the global attractivity of the model is established. Finally, an example is given to demonstrate the effectiveness of the results in this paper.
Berg, Matthew; Hartley, Brian; Richters, Oliver
2015-01-01
By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
Regional disaster impact analysis: comparing input-output and computable general equilibrium models
Koks, Elco E.; Carrera, Lorenzo; Jonkeren, Olaf; Aerts, Jeroen C. J. H.; Husby, Trond G.; Thissen, Mark; Standardi, Gabriele; Mysiak, Jaroslav
2016-08-01
A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: ones that combine both, or either of them, with noneconomic methods. While both IO and CGE models are widely used, they are mainly compared on theoretical grounds. Few studies have compared disaster impacts of different model types in a systematic way and for the same geographical area, using similar input data. Such a comparison is valuable from both a scientific and policy perspective, as the magnitude and the spatial distribution of the estimated losses are likely to vary with the chosen modelling approach (IO, CGE, or hybrid). Hence, regional disaster impact loss estimates resulting from a range of models facilitate better decisions and policy making. Therefore, this study analyses the economic consequences for a specific case study, using three regional disaster impact models: two hybrid IO models and a CGE model. The case study concerns two flood scenarios in the Po River basin in Italy. Modelling results indicate that the difference in estimated total (national) economic losses and the regional distribution of those losses may vary by up to a factor of 7 between the three models, depending on the type of recovery path. Total economic impact, comprising all Italian regions, is nevertheless negative in all models.
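The IO side of such a comparison rests on the Leontief quantity model x = (I - A)^{-1} f: a direct final-demand loss is amplified through inter-industry linkages, so total output losses exceed the direct shock. A minimal sketch with invented coefficients (a CGE counterpart, with price adjustment and substitution, does not fit in a few lines):

```python
import numpy as np

# Sketch of a basic input-output (IO) disaster-impact calculation: a drop
# in final demand propagates through the Leontief inverse, x = (I-A)^{-1} f,
# so indirect losses add to the direct shock. Coefficients are illustrative.

A = np.array([[0.2, 0.3],
              [0.1, 0.25]])            # technical coefficients
L = np.linalg.inv(np.eye(2) - A)       # Leontief inverse

shock = np.array([-10.0, 0.0])         # direct loss in sector 1 final demand

dx = L @ shock                         # total output change, all sectors
direct = shock.sum()
total = dx.sum()
multiplier = total / direct            # total loss per unit of direct loss
```

The output multiplier exceeds 1, which is the IO-model feature that drives the large total-loss estimates discussed above; CGE models typically dampen this through substitution and price effects.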
Parameter optimization in S-system models
Directory of Open Access Journals (Sweden)
Vasconcelos Ana
2008-04-01
Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called alternating regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well.
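An S-system couples power-law production and degradation terms, dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij. A minimal forward simulation of a two-variable system (all parameter values invented for illustration) shows the kind of time series such identification methods take as input:

```python
import numpy as np

# Minimal S-system sketch: each state obeys power-law production and
# degradation, dX_i/dt = a_i * prod_j X_j^G[i,j] - b_i * prod_j X_j^H[i,j].
# We forward-simulate a small 2-variable system with illustrative values.

def s_system_rhs(X, alpha, G, beta, H):
    prod_g = alpha * np.prod(X ** G, axis=1)   # production terms
    prod_h = beta * np.prod(X ** H, axis=1)    # degradation terms
    return prod_g - prod_h

alpha = np.array([2.0, 1.0])
beta = np.array([1.0, 1.0])
G = np.array([[0.0, -0.5],     # X2 inhibits production of X1
              [0.5, 0.0]])     # X1 activates production of X2
H = np.array([[1.0, 0.0],      # first-order degradation of X1
              [0.0, 1.0]])     # first-order degradation of X2

X = np.array([1.0, 1.0])       # initial state
dt = 0.01
for _ in range(20000):         # explicit Euler integration to t = 200
    X = X + dt * s_system_rhs(X, alpha, G, beta, H)
```

The trajectory settles at the steady state where production balances degradation, here X1 = 2^0.8 and X2 = 2^0.4; decoupling-based identification works backwards from such trajectories to the exponents in G and H.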
Modeling of Parameters of Subcritical Assembly SAD
Petrochenkov, S; Puzynin, I
2005-01-01
The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on the MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to the multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra, energy deposition and dose calculations. Some of the calculation results are presented in the paper.
Empirically modelled Pc3 activity based on solar wind parameters
Directory of Open Access Journals (Sweden)
T. Raita
2010-09-01
It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to the existing upstream wave theory: (1) pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; (2) weakening of the bow shock, which implies less efficient reflection; (3) the SW becoming sub-Alfvénic and hence unable to sweep back the waves propagating upstream at the Alfvén speed; and (4) the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through
Consolidating soil carbon turnover models by improved estimates of belowground carbon input
Taghizadeh-Toosi, Arezoo; Christensen, Bent T.; Glendining, Margaret; Olesen, Jørgen E.
2016-09-01
World soil carbon (C) stocks are exceeded only by those in the ocean and the Earth's crust, and represent twice the amount currently present in the atmosphere. Therefore, any small change in the amount of soil organic C (SOC) may affect carbon dioxide (CO2) concentrations in the atmosphere. Dynamic models of SOC help reveal the interaction among soil carbon systems, climate and land management, and they are also frequently used to help assess SOC dynamics. Those models often use allometric functions to calculate soil C inputs, in which the amount of C in both above- and belowground crop residues is assumed to be proportional to crop harvest yield. Here we argue that simulating changes in SOC stocks based on C inputs that are proportional to crop yield is not supported by data from long-term experiments with measured SOC changes. Rather, there is evidence that root C inputs are largely independent of crop yield, but crop specific. We discuss the implications of applying fixed belowground C inputs regardless of crop yield on agricultural greenhouse gas mitigation and accounting.
Baker Syed; Poskar C; Junker Björn
2011-01-01
Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
Analytical modeling of the input admittance of an electric drive for stability analysis purposes
Girinon, S.; Baumann, C.; Piquet, H.; Roux, N.
2009-07-01
Embedded HVDC electric distribution networks face difficult quality and stability issues. To help resolve these problems, this paper develops an analytical model of an electric drive. This self-contained model includes an inverter, its regulation loops and the PMSM. After comparing the model with its equivalent (abc) full model, the study focuses on frequency analysis. The association with an input filter allows the stability of the whole assembly to be expressed by means of the Routh-Hurwitz criterion.
New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints
Directory of Open Access Journals (Sweden)
Qing Lu
2014-01-01
This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.
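The core receding-horizon idea with a hard input constraint can be sketched on a scalar system. This is a one-step toy, not the paper's delayed, polytopic LMI formulation; the plant coefficients, constraint bounds and initial state are assumed values:

```python
# Scalar plant x+ = a*x + b*u with input constraint u in [u_min, u_max].
# A one-step horizon minimising (a*x + b*u)^2 gives u* = -a*x/b, and the
# constraint is enforced by projecting (clipping) onto [u_min, u_max].
a, b = 1.2, 0.8          # open-loop unstable plant (illustrative values)
u_min, u_max = -1.0, 1.0

def mpc_step(x):
    u = -a * x / b                     # unconstrained one-step minimiser
    return max(u_min, min(u_max, u))   # project onto the input constraint

x = 3.0
traj = [x]
for _ in range(20):
    x = a * x + b * mpc_step(x)        # apply the first move, then re-plan
    traj.append(x)
print(abs(traj[-1]) < 1e-6)  # → True: the saturated controller still stabilises
```

Note that with saturated inputs this controller only stabilises initial states inside the constrained-controllable region (here roughly |x| <= b*u_max/(a-1) = 4); guaranteeing stability for the whole constraint-admissible set is exactly what the paper's robust MPC machinery addresses.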
Berger, Theodore W; Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; LaCoss, Jeff; Wills, Jack; Hampson, Robert E; Deadwyler, Sam A; Granacki, John J
2012-03-01
This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the "core" of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1 coded memories that can be made on a single-trial basis and in real-time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline the design in very-large-scale integration for a hardware implementation of a 16-input, 16-output MIMO model, along with spike sorting, amplification, and other functions necessary for a total system, when coupled together with electrode arrays to record extracellularly from populations of hippocampal neurons, that can serve as a cognitive prosthesis in behaving animals.
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther
2017-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current
Moose models with vanishing $S$ parameter
Casalbuoni, R; Dominici, Daniele
2004-01-01
In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the $S$ parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on $K$ SU(2) gauge groups, $K+1$ chiral fields and electroweak groups $SU(2)_L$ and $U(1)_Y$ at the ends of the chain of the moose. $S$ vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical non-local field connecting the two ends of the moose. The model then acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of $S$ through an exponential behaviour of the link couplings, as suggested by the Randall-Sundrum metric.
Model parameters for simulation of physiological lipids
McGlinchey, Nicholas
2016-01-01
Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972
Model parameter uncertainty analysis for an annual field-scale P loss model
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorous (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model
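The comparison of the two uncertainty sources described in this record can be sketched by Monte Carlo propagation through a stand-in model. The equation, parameter values and uncertainty magnitudes below are invented for illustration; they are not APLE's regression equations:

```python
import random
import statistics

random.seed(1)

# Stand-in for one P-loss pathway: P_loss = beta0 + beta1 * soil_P.
def predict(beta0, beta1, soil_p):
    return beta0 + beta1 * soil_p

N = 20000
base = predict(0.5, 0.02, 100.0)   # nominal prediction

# (i) sample only the regression parameters around their fitted values
par_runs = [predict(random.gauss(0.5, 0.1), random.gauss(0.02, 0.005), 100.0)
            for _ in range(N)]
# (ii) sample only the model input (soil test P)
inp_runs = [predict(0.5, 0.02, random.gauss(100.0, 15.0)) for _ in range(N)]

sd_par = statistics.stdev(par_runs)   # parameter-driven prediction spread
sd_inp = statistics.stdev(inp_runs)   # input-driven prediction spread
print(round(base, 2), round(sd_par, 2), round(sd_inp, 2))
```

Comparing `sd_par` and `sd_inp` (and their combined effect when both are sampled) mirrors the paper's finding that the dominant uncertainty source depends on which pathway and field conditions are involved.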
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best-estimate calculation, which has been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in the input parameters of the reactor considered included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
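The sample-size optimization mentioned above follows directly from the first-order Wilks formula: the smallest N for which the extreme order statistics of N runs bound the chosen percentile at the chosen confidence. A short sketch for the two-sided 95%/95% case used in the paper:

```python
# First-order Wilks sample size for a two-sided tolerance interval:
# smallest N satisfying 1 - a**N - N*(1-a)*a**(N-1) >= beta,
# with coverage a and confidence beta.
def wilks_two_sided(a=0.95, beta=0.95):
    n = 2
    while 1 - a**n - n * (1 - a) * a**(n - 1) < beta:
        n += 1
    return n

print(wilks_two_sided())  # → 93 Monte Carlo runs for a 95%/95% two-sided interval
```

The appeal of the approach is that 93 runs suffice regardless of how many input parameters (dimensions, densities, ...) are sampled simultaneously.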
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
Directory of Open Access Journals (Sweden)
Kennedy Curtis E
2011-10-01
Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8)
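The windowing and feature steps of the method in this record can be sketched in a few lines: cut a raw vital-sign series into fixed windows and compute simple time-series features (mean and least-squares slope) as latent variables for a downstream classifier. The window width, the feature set and the heart-rate values are assumptions for illustration:

```python
# Turn a raw series into per-window latent features (mean, slope).
def window_features(series, width):
    out = []
    for start in range(0, len(series) - width + 1, width):
        w = series[start:start + width]
        mean = sum(w) / width
        # least-squares slope against sample index 0..width-1
        t_mean = (width - 1) / 2
        slope = (sum((t - t_mean) * (v - mean) for t, v in enumerate(w))
                 / sum((t - t_mean) ** 2 for t in range(width)))
        out.append({"mean": mean, "slope": slope})
    return out

hr = [92, 93, 95, 98, 102, 107, 113, 120]  # hypothetical heart-rate samples
print(window_features(hr, 4))
```

A rising slope from one window to the next is the kind of deterioration signal that a single-snapshot multivariable score cannot represent, which is the motivation for the time-series approach.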
Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations
Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick
2017-01-01
Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.
Stable isotopes and Digital Elevation Models to study nutrient inputs in high-Arctic lakes
Calizza, Edoardo; Rossi, David; Costantini, Maria Letizia; Careddu, Giulio; Rossi, Loreto
2016-04-01
Ice cover, run-off from the watershed, aquatic and terrestrial primary productivity, guano deposition from birds are key factors controlling nutrient and organic matter inputs in high-Arctic lakes. All these factors are expected to be significantly affected by climate change. Quantifying these controls is a key baseline step to understand what combination of factors subtends the biological productivity in Arctic lakes and will drive their ecological response to environmental change. Basing on Digital Elevation Models, drainage maps, and C and N elemental content and stable isotope analysis in sediments, aquatic vegetation and a dominant macroinvertebrate species (Lepidurus arcticus Pallas 1973) belonging to Tvillingvatnet, Storvatnet and Kolhamna, three lakes located in North Spitsbergen (Svalbard), we propose an integrated approach for the analysis of (i) nutrient and organic matter inputs in lakes; (ii) the role of catchment hydro-geomorphology in determining inter-lake differences in the isotopic composition of sediments; (iii) effects of diverse nutrient inputs on the isotopic niche of Lepidurus arcticus. Given its high run-off and large catchment, organic deposits in Tvillingvatnet where dominated by terrestrial inputs, whereas inputs were mainly of aquatic origin in Storvatnet, a lowland lake with low potential run-off. In Kolhamna, organic deposits seem to be dominated by inputs from birds, which actually colonise the area. Isotopic signatures were similar between samples within each lake, representing precise tracers for studies on the effect of climate change on biogeochemical cycles in lakes. The isotopic niche of L. aricticus reflected differences in sediments between lakes, suggesting a bottom-up effect of hydro-geomorphology characterizing each lake on nutrients assimilated by this species. The presented approach proven to be an effective research pathway for the identification of factors subtending to nutrient and organic matter inputs and transfer
A diffusion model for drying of a heat sensitive solid under multiple heat input modes.
Sun, Lan; Islam, Md Raisul; Ho, J C; Mujumdar, A S
2005-09-01
To obtain optimal drying kinetics as well as quality of the dried product in a batch dryer, the energy required may be supplied by combining different modes of heat transfer. In this work, using potato slice as a model heat sensitive drying object, experimental studies were conducted using a batch heat pump dryer designed to permit simultaneous application of conduction and radiation heat. Four heat input schemes were compared: pure convection, radiation-coupled convection, conduction-coupled convection and radiation-conduction-coupled convection. A two-dimensional drying model was developed assuming the drying rate to be controlled by liquid water diffusion. Both drying rates and temperatures within the slab during drying under all these four heat input schemes showed good accord with measurements. Radiation-coupled convection is the recommended heat transfer scheme from the viewpoint of high drying rate and low energy consumption.
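The diffusion-controlled drying assumption in this record can be sketched as a 1-D explicit finite-difference scheme for moisture in a slab. The diffusivity, slab thickness, grid and boundary condition below are illustrative values, not the paper's fitted potato parameters or its 2-D model:

```python
# 1-D liquid-diffusion drying analogue: dM/dt = D * d2M/dx2 on a slab whose
# two faces are held at equilibrium with the drying air (M = 0).
D = 1e-9          # effective moisture diffusivity, m^2/s (assumed)
L = 0.01          # slab thickness, m (assumed)
n = 21
dx = L / (n - 1)
dt = 0.2 * dx * dx / D        # respects the explicit stability limit dt <= dx^2/(2D)

M = [1.0] * n                 # uniform initial moisture (normalised, dry basis)
M[0] = M[-1] = 0.0            # dry faces
for _ in range(500):
    M = [0.0] + [M[i] + D * dt / dx**2 * (M[i+1] - 2*M[i] + M[i-1])
                 for i in range(1, n - 1)] + [0.0]

avg = sum(M) / n
print(round(avg, 3))          # mean moisture has dropped well below the initial 1.0
```

Coupling the different heat input modes enters such a model through the surface boundary condition and the temperature dependence of D, which is where the four heating schemes compared in the paper differ.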
On the redistribution of existing inputs using the spherical frontier DEA model
Directory of Open Access Journals (Sweden)
José Virgilio Guedes de Avellar
2010-04-01
The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new, fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new, fixed input in such a way that every DMU is placed on an efficiency frontier with a spherical shape. We use SFM to analyse the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyse the case in which this total sum may vary.
Institute of Scientific and Technical Information of China (English)
景绍学
2016-01-01
Because the traditional least-squares method takes into account neither the noise covariance nor the prior probability density of the parameters during identification, a recursive Bayesian parameter identification algorithm is proposed. The algorithm estimates the parameters by maximising their posterior probability density function. Experimental results show that the proposed algorithm yields parameter estimates of higher accuracy, and convergence analysis indicates that the estimates converge to the true parameter values. By jointly considering the noise variance and the prior probability distributions of the data and the parameters, the algorithm obtains estimates more accurate than those of least squares.
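For a scalar linear-in-parameters model the recursive Bayesian idea reduces to a conjugate Gaussian update, which makes the role of the noise variance and the prior explicit. The model, prior and noise values below are assumptions for illustration, not the paper's system:

```python
import random

random.seed(0)

# y_k = theta * x_k + e_k with known noise variance r and a Gaussian prior
# on theta. Each step updates the posterior mean/variance of theta; unlike
# plain least squares, the prior and the noise variance enter explicitly.
theta_true, r = 3.0, 0.25
mean, var = 0.0, 10.0          # prior on theta (assumed)

for _ in range(200):
    xk = random.uniform(0.5, 1.5)
    yk = theta_true * xk + random.gauss(0.0, r ** 0.5)
    # standard Gaussian conjugate update (Kalman-gain form)
    gain = var * xk / (xk * xk * var + r)
    mean = mean + gain * (yk - xk * mean)
    var = var * r / (xk * xk * var + r)   # equivalently (1 - gain*xk)*var

print(round(mean, 2))  # posterior mean converges towards theta_true = 3.0
```

With a very diffuse prior (large initial `var`) the recursion approaches recursive least squares; the Bayesian gain differs precisely through `r` and the prior, which is the point the abstract makes.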
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2010-01-01
Mean outcrossing rates can be used as a basis for decision support for ships in severe sea. The article describes a procedure for calculating the mean outcrossing rate of non-Gaussian processes with stochastic input parameters. The procedure is based on the first-order reliability method (FORM), and stochastic parameters are incorporated by carrying out a number of FORM calculations corresponding to combinations of specific values of the stochastic parameters. Subsequently, the individual FORM calculations are weighted according to the joint probability with which the specific combination of parameters occurs. The results of the procedure are compared with brute-force simulations obtained by Monte Carlo simulation (MCS) and good agreement is observed. Importantly, the procedure requires significantly less CPU time than MCS to produce mean outcrossing rates.
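The weighting step described above amounts to a probability-weighted average of per-combination FORM results. The rates and joint probabilities below are made-up placeholders, not ship-response values:

```python
# One FORM calculation per combination of the stochastic parameters
# (e.g. sea-state parameters); the overall mean outcrossing rate is the
# joint-probability-weighted sum of the individual rates.
combos = [
    {"p": 0.2, "nu": 1e-5},   # (probability, FORM outcrossing rate) - assumed
    {"p": 0.5, "nu": 4e-5},
    {"p": 0.3, "nu": 2e-4},
]
assert abs(sum(c["p"] for c in combos) - 1.0) < 1e-12  # probabilities must sum to 1

nu_mean = sum(c["p"] * c["nu"] for c in combos)
print(f"{nu_mean:.2e}")  # → 8.20e-05
```

Each `nu` here stands in for a full FORM run, which is why the scheme is so much cheaper than Monte Carlo: only a handful of deterministic reliability calculations are needed.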
Minimal state space realisation of continuous-time linear time-variant input-output models
Goos, J.; Pintelon, R.
2016-04-01
In the linear time-invariant (LTI) framework, the transformation from an input-output equation into state space representation is well understood. Several canonical forms exist that realise the same dynamic behaviour. If the coefficients become time-varying however, the LTI transformation no longer holds. We prove by induction that there exists a closed-form expression for the observability canonical state space model, using binomial coefficients.
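As an LTI warm-up for the canonical form discussed above, the second-order input-output equation y'' + a1*y' + a0*y = b0*u can be realised in an observability canonical state-space form. The coefficient values are arbitrary; in the LTV case treated by the paper these entries become binomial-weighted combinations of the time-varying coefficients and their derivatives:

```python
# Observability canonical realisation of y'' + a1*y' + a0*y = b0*u:
#   x' = A x + B u,  y = C x,
# with the characteristic polynomial s^2 + a1*s + a0 encoded in A.
def observable_canonical(a, b0):
    a0, a1 = a
    A = [[0.0, -a0],
         [1.0, -a1]]
    B = [[b0],
         [0.0]]
    C = [[0.0, 1.0]]
    return A, B, C

A, B, C = observable_canonical((2.0, 3.0), 5.0)
print(A)  # last column carries the coefficients of s^2 + 3s + 2
```

One can check that C*(sI - A)^{-1}*B = b0 / (s^2 + a1*s + a0), i.e. the realisation reproduces the input-output behaviour exactly, which is the property that fails to carry over naively once the coefficients depend on time.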
Integrated Flight Mechanic and Aeroelastic Modelling and Control of a Flexible Aircraft Considering Multidimensional Gust Input
Teufel, Patrick; Hanel, Martin
2000-05-01
The Role of Spatio-Temporal Resolution of Rainfall Inputs on a Landscape Evolution Model
Skinner, C. J.; Coulthard, T. J.
2015-12-01
Landscape Evolution Models are important experimental tools for understanding the long-term development of landscapes. Designed to simulate timescales ranging from decades to millennia, they are usually driven by precipitation inputs that are lumped, both spatially across the drainage basin, and temporally to daily, monthly, or even annual rates. This is based on an assumption that the spatial and temporal heterogeneity of the rainfall will equalise over the long timescales simulated. However, recent studies (Coulthard et al., 2012) have shown that such models are sensitive to event magnitudes, with exponential increases in sediment yields generated by linear increases in flood event size at a basin scale. This suggests that there may be a sensitivity to the spatial and temporal scales of rainfall used to drive such models. This study uses the CAESAR-Lisflood Landscape Evolution Model to investigate the impact of spatial and temporal resolution of rainfall input on model outputs. The sediment response to a range of temporal (15 min to daily) and spatial (5 km to 50km) resolutions over three different drainage basin sizes was observed. The results showed the model was sensitive to both, generating up to 100% differences in modelled sediment yields with smaller spatial and temporal resolution precipitation. Larger drainage basins also showed a greater sensitivity to both spatial and temporal resolution. Furthermore, analysis of the distribution of erosion and deposition patterns suggested that small temporal and spatial resolution inputs increased erosion in drainage basin headwaters and deposition in the valley floors. Both of these findings may have implications for existing models and approaches for simulating landscape development.
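The sensitivity to rainfall lumping reported above has a simple mathematical core: geomorphic transport laws are convex in the forcing (roughly sediment flux ~ discharge**m with m > 1), so averaging rainfall before applying the law systematically underestimates the response. The hourly pattern and exponent below are illustrative assumptions, not CAESAR-Lisflood values:

```python
# Compare a nonlinear response applied at fine resolution vs. applied to the
# temporally lumped mean of the same rainfall (Jensen's inequality in action).
m = 1.7
hourly = [0, 0, 12, 3, 0, 0, 0, 9] * 3    # mm/h, a bursty storm pattern (assumed)

resolved = sum(r ** m for r in hourly)     # law applied to each hourly value
daily_mean = sum(hourly) / len(hourly)
lumped = len(hourly) * daily_mean ** m     # law applied to the lumped average

print(round(resolved / lumped, 2))  # resolved forcing yields a markedly larger total
```

The same argument applies spatially: lumping rainfall over a large basin smooths out the intense cells that drive headwater erosion, consistent with the erosion and deposition patterns the study reports.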
Energy Technology Data Exchange (ETDEWEB)
Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)
2014-05-01
A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of verifying the accuracy of such translated files, for instance by nuclear regulators, can therefore be very difficult and cumbersome. Translation errors may consequently go undetected, which can have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
Chen, Ding-Jiang; Sun, Si-Yang; Jia, Ying-Na; Chen, Jia-Bo; Lü, Jun
2013-01-01
Based on the hydrological difference between the point source (PS) and nonpoint source (NPS) pollution processes and the major influencing mechanism of in-stream retention processes, a bivariate statistical model was developed for relating river phosphorus load to river water flow rate and temperature. Using the four model coefficients calibrated and validated from in-stream monitoring data, monthly phosphorus input loads to the river from PS and NPS can be easily determined by the model. Compared to current hydrological methods, this model takes the in-stream retention process and the upstream inflow term into consideration; thus it improves the knowledge on phosphorus pollution processes and can meet the requirements of both the district-based and watershed-based water quality management patterns. Using this model, the total phosphorus (TP) input load to the Changle River in Zhejiang Province was calculated. Results indicated that the annual total TP input load was (54.6 ± 11.9) t a⁻¹ in 2004-2009, with upstream water inflow, PS and NPS contributing 5% ± 1%, 12% ± 3% and 83% ± 3%, respectively. The cumulative NPS TP input load during the high flow periods (i.e., June, July, August and September) in summer accounted for 50% ± 9% of the annual amount, increasing the algal bloom risk in downstream water bodies. The annual in-stream TP retention load was (4.5 ± 0.1) t a⁻¹ and occupied 9% ± 2% of the total input load. The cumulative in-stream TP retention load during the summer periods (i.e., June-September) accounted for 55% ± 2% of the annual amount, indicating that the in-stream retention function plays an important role in seasonal TP transport and transformation processes. This bivariate statistical model only requires commonly available in-stream monitoring data (i.e., river phosphorus load, water flow rate and temperature) with no requirement of special software knowledge; thus it offers researchers and managers a cost-effective tool for
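The abstract does not give the model's functional form. As a sketch of the calibration step, one can assume a hypothetical four-coefficient form in which the PS load is near-constant, the NPS load scales as a power of flow, and in-stream retention grows with water temperature, and fit it by nonlinear least squares; all names and values below are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical four-coefficient form (the paper's actual equation is not given
# in the abstract): PS enters as a near-constant load, NPS scales as a power of
# flow, and in-stream retention grows with water temperature.
def tp_load(X, c_ps, a, b, k):
    Q, T = X
    return (c_ps + a * Q**b) * np.exp(-k * T)

rng = np.random.default_rng(0)
Q = rng.uniform(5.0, 60.0, 72)          # monthly mean flow (synthetic)
T = rng.uniform(5.0, 30.0, 72)          # water temperature (synthetic)
true = (2.0, 0.4, 1.3, 0.01)
L = tp_load((Q, T), *true) * (1 + 0.05 * rng.standard_normal(72))

# Calibrate the four coefficients against the monitored loads.
popt, pcov = curve_fit(tp_load, (Q, T), L, p0=(1.0, 0.5, 1.0, 0.02))
pred = tp_load((Q, T), *popt)
```

With the coefficients in hand, the PS and NPS terms can be evaluated separately per month, which is how such a model apportions the total load.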
Uncertainty Quantification for Optical Model Parameters
Lovell, A E; Sarich, J; Wild, S M
2016-01-01
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
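A minimal sketch of the fit-and-band step described above, using a toy two-parameter shape in place of the actual optical-model cross section (the model, parameter names, and noise level are all illustrative): fit by weighted least squares, then propagate the parameter covariance to a 95% band by sampling.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in for an angular distribution; not the actual optical model.
def model(theta, A, w):
    return A * np.exp(-theta**2 / (2 * w**2))

rng = np.random.default_rng(1)
theta = np.linspace(0.1, 2.0, 40)
y_true = model(theta, 5.0, 0.8)
y_obs = y_true + 0.1 * rng.standard_normal(theta.size)

# Best fit with known measurement uncertainty (sigma = 0.1).
popt, pcov = curve_fit(model, theta, y_obs, p0=(4.0, 1.0),
                       sigma=0.1 * np.ones(theta.size), absolute_sigma=True)

# Propagate the fitted-parameter covariance to a 95% band by sampling.
draws = rng.multivariate_normal(popt, pcov, size=2000)
curves = np.array([model(theta, *d) for d in draws])
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

The same parameter draws can then be pushed through an inelastic or transfer calculation to see how the elastic-fit uncertainty propagates.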
Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran
2016-03-01
The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, demonstrating that information theoretic (nonlinear)-based input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process when compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric pretenses such as k nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by considering the uncertainty in the input variable selection procedure by introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name our proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI when compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
Numerical modeling of partial discharges parameters
Directory of Open Access Journals (Sweden)
Kartalović Nenad M.
2016-01-01
Full Text Available The use of partial discharge testing for diagnosing the insulation condition of high-voltage generators, transformers, cables and other high-voltage equipment is developing rapidly. This is a result of advances in electronics, as well as of growing knowledge about the processes of partial discharges. The aim of this paper is to contribute to a better understanding of the phenomenon of partial discharges by considering the relevant physical processes in insulation materials and insulation systems. The pre-breakdown stage is examined in terms of the specific processes involved, their development at the local level, and their impact on particular insulation materials. This approach to the phenomenon of partial discharges makes it possible to take the relevant discharge parameters into account more accurately, and to build a better numerical model of partial discharges.
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis (QMRA) approach, has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations above the detection limit (DL), and a uniform distribution for concentrations below the DL. The assumptions made for treatment removal credits affected the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full-scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
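A hedged sketch of such a Monte Carlo chain, using the mixed source-water distribution described above. The detection limit, detection frequency, removal credit, consumption volume, and dose-response coefficient below are illustrative placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                       # Monte Carlo iterations
DL = 0.1                          # detection limit, organisms/L (illustrative)
p_detect = 0.3                    # fraction of samples above the DL (illustrative)

# Mixed source-water distribution: log-normal above the DL, uniform below it.
above = rng.random(N) < p_detect
conc = np.where(above,
                rng.lognormal(mean=np.log(0.5), sigma=1.0, size=N),
                rng.uniform(0.0, DL, size=N))

log_removal = rng.normal(3.0, 0.3, size=N)       # treatment credit (illustrative)
dose = conc * 10.0**(-log_removal) * 1.5         # 1.5 L/day consumption (illustrative)
p_day = 1.0 - np.exp(-0.004 * dose)              # exponential dose-response (illustrative r)
p_year = 1.0 - (1.0 - p_day)**365                # annual risk per iteration

mean_annual_risk = p_year.mean()
```

Swapping the removal-credit line for alternative assumptions is exactly how the different mean annual risks quoted above would be compared.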
Baumann-Stanzer, K.; Stenzel, S.
2009-04-01
Several air dispersion models are available for the prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Uncertainties in the meteorological input, together with incorrect estimates of the source, play a critical role in the model results. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program at the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were: 1. A sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. A comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. This presentation gives a short introduction to the project and presents the results of task 1 (meteorological input). The results of task 2 are presented by Stenzel and Baumann-Stanzer in this session. For the aims of this project, the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), was used. INCA (Integrated Nowcasting through Comprehensive Analysis) data were calculated with 1 km horizontal resolution and
A time-resolved model of the mesospheric Na layer: constraints on the meteor input function
Directory of Open Access Journals (Sweden)
J. M. C. Plane
2004-01-01
Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
Water yield and sediment yield in the Teba catchment, Spain, were simulated using the SWRRB (Simulator for Water Resources in Rural Basins) model. The model is composed of 198 mathematical equations. About 120 items (variables) were input for the simulation, including meteorological and climatic factors, hydrologic factors, topographic factors, parent materials, soils, vegetation, human activities, etc. The simulated results covered surface runoff, subsurface runoff, sediment, peak flow, evapotranspiration, soil water, total biomass, etc. Careful and thorough input data preparation and repeated simulation experiments are the key to obtaining accurate results. In this work the simulation accuracy for annual water yield prediction reached 83.68%.
Energy Technology Data Exchange (ETDEWEB)
Rhee, Hyun-Me; Kim, Min Kyu; Choi, In-Kil [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Sheen, Dong-Hoon [Chonnam National University, Gwangju (Korea, Republic of)
2014-10-15
Tsunami hazard analysis has been based on seismic hazard analysis, which has been performed using deterministic and probabilistic methods. To account for the uncertainties in hazard analysis, the probabilistic method has been regarded as an attractive approach. In the probabilistic method, the various parameters and their weights are considered by using the logic tree approach. Because many parameters are used in the hazard analysis, their uncertainties should be characterized through sensitivity analysis. To apply probabilistic tsunami hazard analysis, a preliminary study for the Ulchin NPP site had been performed. The information on the fault sources which was published by the Atomic Energy Society of Japan (AESJ) had been used in the preliminary study. The tsunami propagation was simulated using TSUNAMI 1.0, which was developed by the Japan Nuclear Energy Safety Organization (JNES). The wave parameters have been estimated from the results of the tsunami simulation. In this study, a sensitivity analysis for the fault sources which were selected in the previous studies has been performed. To analyze the effect of the parameters, a sensitivity analysis for the E3 fault source published by AESJ was performed. The effects of the recurrence interval, the potential maximum magnitude, and the beta were suggested by the sensitivity analysis results. The level of annual exceedance probability has been affected by the recurrence interval. Wave heights have been influenced by the potential maximum magnitude and the beta. In the future, a sensitivity analysis for all fault sources in the western part of Japan published by AESJ would be performed.
Unitary input DEA model to identify beef cattle production systems typologies
Directory of Open Access Journals (Sweden)
Eliane Gonçalves Gomes
2012-08-01
Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous regions of production, with consequent implementation of actions to achieve its sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems, using the isoefficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.
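With a single unitary input, the CCR efficiency of each production system reduces to a small linear program over output weights, and the inverted frontier is obtained by exchanging the roles of inputs and outputs. The sketch below uses hypothetical output data and one common composite convention; it is not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

Y = np.array([[4., 2.],   # outputs of 4 hypothetical production systems
              [2., 4.],
              [3., 3.],
              [1., 1.]])

def dea_unitary(Y, o):
    """CCR efficiency of DMU o when every DMU uses a single unitary input:
    max u.y_o  s.t.  u.y_j <= 1 for all j,  u >= 0."""
    res = linprog(c=-Y[o], A_ub=Y, b_ub=np.ones(len(Y)),
                  bounds=[(0, None)] * Y.shape[1])
    return -res.fun

def dea_inverted(Y, o):
    """Efficiency against the inverted (worst-practice) frontier, obtained by
    swapping inputs and outputs: max t  s.t.  t <= v.y_j,  v.y_o = 1,  v >= 0."""
    n, m = Y.shape
    c = np.zeros(m + 1); c[0] = -1.0                 # maximise t
    A_ub = np.hstack([np.ones((n, 1)), -Y])          # t - v.y_j <= 0
    A_eq = np.hstack([[[0.0]], [Y[o]]])              # v.y_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * m)
    return -res.fun

eff = [dea_unitary(Y, o) for o in range(len(Y))]
inv = [dea_inverted(Y, o) for o in range(len(Y))]
composite = [(e + 1 - i) / 2 for e, i in zip(eff, inv)]  # one common convention
```

Grouping systems by composite score (or by isoefficiency layers, as the paper does) then yields the typologies.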
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples.
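A sketch of the Gröbner-basis alternative on a hypothetical linear two-compartment model (not one of the paper's examples): encode the state and output equations as polynomials, use a lex order that ranks the unobserved states highest, and read the input-output equation off the elimination ideal. The symbols a, b, k12, k21 stand for assumed rate-constant combinations.

```python
import sympy as sp

# States x1, x2; output derivatives y, y1, y2; input u and its derivative u1.
x1, x2, y, y1, y2, u, u1 = sp.symbols('x1 x2 y y1 y2 u u1')
a, b, k12, k21 = sp.symbols('a b k12 k21')   # hypothetical rate-constant combinations

# Model x1' = -a*x1 + k12*x2 + u,  x2' = k21*x1 - b*x2,  y = x1,
# written as polynomials that must vanish (y1 = y', y2 = y'').
polys = [
    y - x1,
    y1 + a*x1 - k12*x2 - u,
    y2 + a*y1 - k12*(k21*x1 - b*x2) - u1,
]

# Lex order with the unobserved states first eliminates x1, x2; any basis
# element free of the states is an input-output relation.
G = sp.groebner(polys, x2, x1, y2, y1, y, u1, u, a, b, k12, k21, order='lex')

# Input-output equation obtained by hand-elimination, for comparison.
io_expected = y2 + (a + b)*y1 + (a*b - k12*k21)*y - b*u - u1
```

Because `io_expected` lies in the ideal generated by the model polynomials, it reduces to zero modulo the Gröbner basis, which is the membership check an identifiability analysis would rely on.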
Directory of Open Access Journals (Sweden)
Bing Liu
2013-01-01
Full Text Available By using a pollution model and impulsive delay differential equations, we formulate a pest control model with stage structure for the natural enemy in a polluted environment by introducing a constant periodic pollutant input and killing pests at different fixed moments, and investigate the dynamics of such a system. We assume only that the natural enemies are affected by pollution, and we choose a control method that kills the pests without harming the natural enemies. Sufficient conditions for global attractivity of the natural enemy-extinction periodic solution and permanence of the system are obtained. Numerical simulations are presented to confirm our theoretical results.
Kumar, Y Satish; Talarico, Claudio; Wang, Janet; 10.1109/DATE.2005.31
2011-01-01
Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes the impact of both process variations and multiple input switching. The proposed model uses an orthogonal-polynomial-based probabilistic collocation method to construct a delay analytical equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation.
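A sketch of the probabilistic collocation step for a single gate, assuming one standard-normal process parameter and a hypothetical quadratic delay response (the paper handles multiple variation sources and input switching): project the delay onto probabilists' Hermite polynomials at Gauss-Hermite collocation points, then read the mean and standard deviation off the coefficients.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

def pce_moments(delay, order=4, nodes=8):
    """Project a delay response onto probabilists' Hermite polynomials
    (collocation at Gauss-Hermite points) and read off mean and std.
    Uses E[He_j He_k] = k! * delta_jk for a standard-normal variable."""
    x, w = H.hermegauss(nodes)              # nodes/weights for weight exp(-x^2/2)
    norm = math.sqrt(2.0 * math.pi)
    fx = delay(x)
    c = np.array([(w * fx * H.hermeval(x, np.eye(order + 1)[k])).sum()
                  / (norm * math.factorial(k)) for k in range(order + 1)])
    var = sum(c[k]**2 * math.factorial(k) for k in range(1, order + 1))
    return c[0], math.sqrt(var)

# Hypothetical gate delay, quadratic in a standard-normal process parameter dx.
mean, std = pce_moments(lambda dx: 10.0 + 0.5 * dx + 0.2 * dx**2)
```

For the quadratic response above the expansion is exact, so the mean is 10.0 + 0.2·E[dx²] = 10.2 and the variance is 0.5² + 2·0.2² = 0.33, with no Monte Carlo sampling needed.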
Realistic modeling of seismic input for megacities and large urban areas
Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team
2003-04-01
The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool for the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have been proven to be quite unreliable, mainly when dealing with complex geological structures, the most interesting ones from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most of the real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, and geophysical parameters, topography of the medium, tectonic, historical, and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios that represent a valid and economic tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore
Tamagnone, Michele
2014-01-01
An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined for a vast parametric space, leading to very accurate modeling. Finally, we also show that the modeling approach allows a fair estimation of the radiation efficiency as well. The approach also applies to thin plasmonic antennas realized using noble metals or semiconductors.
Calibration of back-analysed model parameters for landslides using classification statistics
Cepeda, Jose; Henderson, Laura
2016-04-01
Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
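The two normalised ROC-space parameters for a cloud of classifiers are described only qualitatively above; the implementation below is an illustration of that idea, and these exact formulas are assumptions rather than the authors' definitions.

```python
import numpy as np

def cloud_parameters(fpr, tpr):
    """Two normalised ROC-space summaries for a cloud of classifiers
    (illustrative definitions, not necessarily the authors').
    d_perfect: mean distance to the perfect-classification point (0, 1),
               scaled so 0 is perfect and 1 is the opposite corner.
    position:  mean Youden index, 1 on the perfect ROC border and 0 on
               the no-discrimination diagonal."""
    fpr = np.asarray(fpr, float)
    tpr = np.asarray(tpr, float)
    d_perfect = np.mean(np.hypot(fpr, 1.0 - tpr)) / np.sqrt(2.0)
    position = np.mean(tpr - fpr)
    return d_perfect, position
```

Each candidate parameter set for the back-analysis contributes one (FPR, TPR) point; the model whose cloud minimises `d_perfect` (or maximises `position`) would be preferred.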
Stegemann, Sven; Connolly, Paul; Matthews, Wayne; Barnett, Rodger; Aylott, Mike; Schrooten, Karin; Cadé, Dominique; Taylor, Anthony; Bresciani, Massimo
2014-06-01
Understanding the influence of product and process variables on final product performance is an essential part of the quality-by-design (QbD) principles in pharmaceutical development. The hard capsule is an established pharmaceutical dosage form used worldwide in development and manufacturing. The empty hard capsules are supplied as an excipient that is filled by pharmaceutical manufacturers with a variety of different formulations and products. To understand the potential variations of the empty hard capsules as an input parameter, and their potential impact on finished product quality, a study was performed investigating the critical quality parameters within and between different batches of empty hard gelatin capsules. The variability of the hard capsules showed high consistency within the specification of the critical quality parameters. This also holds for the disintegration times when automatic endpoint detection was used. Based on these data, hard capsules can be considered a suitable excipient for product development using QbD principles.
Parameter Optimisation for the Behaviour of Elastic Models over Time
DEFF Research Database (Denmark)
Mosegaard, Jesper
2004-01-01
Optimisation of parameters for elastic models is essential for comparison or for finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method that will optimise parameters based on the behaviour of the elastic models over time.
Model Identification of Linear Parameter Varying Aircraft Systems
Fujimore, Atsushi; Ljung, Lennart
2007-01-01
This article presents a parameter estimation of continuous-time polytopic models for a linear parameter varying (LPV) system. The prediction error method of linear time invariant (LTI) models is modified for polytopic models. The modified prediction error method is applied to an LPV aircraft system whose varying parameter is the flight velocity and model parameters are the stability and control derivatives (SCDs). In an identification simulation, the polytopic model is more suitable for expre...
Directory of Open Access Journals (Sweden)
Diego Iribarren
2013-07-01
Full Text Available Economic, social and environmental dimensions are usually accepted as the three pillars of sustainable development. However, current methodologies for the assessment of the sustainability of product systems fail to cover economic, environmental and social parameters in a single combined approach. Even though the perfect methodology is still far off, this article attempts to provide insights on the potential of the five-step LCA + DEA method, based on both the Life Cycle Assessment (LCA) and Data Envelopment Analysis (DEA) methodologies, to cope with operational (economic), environmental and social parameters when evaluating multiple similar entities. The LCA + DEA methodology has already been proven to be a suitable approach for the evaluation of a homogeneous set of units from an operational and environmental perspective, while allowing the consideration of economic aspects. However, this is the first study focused on the implementation of social parameters in LCA + DEA studies. The suitability of labor as an additional DEA item is evaluated to validate this integrative LCA + DEA concept. Illustrative case studies are used to show the advantages and drawbacks associated with the use of labor in terms of number of workers and number of working hours. In light of the results, the integrative LCA + DEA concept is seen as an all-in-one methodology, which is easy to implement, even though relevant limitations should be discussed in order to guarantee an appropriate interpretation of the social results derived from the proposed method.
Feister, U.; Junk, J.; Woldt, M.; Bais, A.; Helbig, A.; Janouch, M.; Josefsson, W.; Kazantzidis, A.; Lindfors, A.; den Outer, P. N.; Slaper, H.
2008-06-01
Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites by more than 100 years into the past. Special emphasis will be given to the discussion of small-scale characteristics of input data to the ANN model. Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with variabilities and changes of measured input data, in particular global dimming by about 1980/1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.
Definition of Saturn's magnetospheric model parameters for the Pioneer 11 flyby
Directory of Open Access Journals (Sweden)
E. S. Belenkaya
2006-05-01
Full Text Available This paper presents a description of a method for selecting parameters for a global paraboloid model of Saturn's magnetosphere. The model is based on the preexisting paraboloid terrestrial and Jovian models of the magnetospheric field. Interaction of the solar wind with the magnetosphere, i.e. the magnetotail current system and the magnetopause currents screening all magnetospheric field sources, is taken into account. The input model parameters are determined from observations of the Pioneer 11 inbound flyby.
Solving Inverse Problems for Mechanistic Systems Biology Models with Unknown Inputs
2014-10-16
The effect of frusemide in terms of diuresis and natriuresis can be modeled by an indirect response model [18]. In this project, a modified version of this model was used... were derived from their measurements. The model relating the effect-site excretion rate of frusemide to diuresis is given by: ... (time courses of frusemide infusion rate, frusemide urinary excretion rate, diuresis and natriuresis). The “true” parameter values used in the
Schneider, Robert; Haberl, Alexander; Rascher, Rolf
2017-06-01
The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries to a high level of precision. Beyond a certain limit in the required shape accuracy of optical workpieces, processing changes from two-dimensional to point-shaped processing. It is very important that the process is as stable as possible during point-shaped processing. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contacting procedure is used in point-contact mode, and it is examined closely whether changing several process parameters during processing is meaningful. The ADAPT tool in size R20 from Satisloh AG is used, which is also commercially available. The behavior of the tool is tested under constant conditions in the MCP 250 CNC by OptoTech GmbH. A series of experiments should enable the TIF (tool influence function) to be determined using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to deal with a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), and thus this algorithm can be used. The next step is the useful implementation of the collected knowledge. The TIF must be selected on the basis of the measured data. It is important to know the error frequencies in order to select the optimal TIF. Thus, it is possible to compare the simulated results with real measurement
A novel criterion for determination of material model parameters
Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.
2011-05-01
Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained with Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective function should be able to lead the optimisation process efficiently. An ideal objective function should have the following properties: (i) all the experimental data points on a curve, and all experimental curves, should have an equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null experimental or numerical values also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
Santosa, H.; Hobara, Y.; Balikhin, M. A.
2015-12-01
Very Low Frequency (VLF) waves have been proposed as an approach to study and monitor lower ionospheric conditions. Ionospheric perturbations are identified in relation to thunderstorm activity, geomagnetic storms and other factors. The temporal dependence of VLF amplitude generally shows complicated and large daily variability due to the combination of effects from above the ionosphere (space weather) and from below it (atmospheric and crustal processes). Quantitative contributions from the different external sources are not yet well known. Modelling and prediction of VLF wave amplitude are therefore important for studying the lower ionospheric response to various external parameters and for detecting ionospheric anomalies. The purpose of this study is to model and predict the nighttime average amplitude of VLF wave propagation along the path from the VLF transmitter in Hawaii (NPM) to the receiver in Chofu (CHO), Tokyo, Japan, using a NARX neural network. The constructed model was trained with the nighttime average amplitude of the NPM-CHO path as the target parameter. The NARX model, built on daily input variables of various physical parameters such as stratospheric temperature, cosmic rays and total column ozone, achieved good accuracy. As a result, the constructed models are capable of accurate multi-step-ahead predictions while maintaining acceptable one-step-ahead prediction accuracy. The predicted daily VLF amplitudes are in good agreement with observed (true) values for one-step-ahead prediction (r = 0.92, RMSE = 1.99), multi-step-ahead 5-day prediction (r = 0.91, RMSE = 1.14) and multi-step-ahead 10-day prediction (r = 0.75, RMSE = 1.74). The developed model demonstrates the feasibility and reliability of predicting lower ionospheric properties with the NARX neural network approach, and provides physical insight into the responses of the lower ionosphere to various external forcings.
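As a minimal illustration of the evaluation metrics quoted above (Pearson r and RMSE) and of the NARX idea of regressing the next output on lagged outputs plus exogenous inputs, the sketch below fits a one-lag linear ARX model by least squares on synthetic data. The model structure, coefficients and data are assumptions for illustration only, far simpler than the neural NARX model in the study.

```python
import math

def fit_arx(y, x):
    """Least-squares fit of y[t] = a*y[t-1] + b*x[t] (a minimal NARX-like
    one-step model with one output lag and one exogenous input)."""
    rows = [(y[t - 1], x[t]) for t in range(1, len(y))]
    targets = y[1:]
    # Normal equations for the 2x2 system.
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * v for r, v in zip(rows, targets))
    t2 = sum(r[1] * v for r, v in zip(rows, targets))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Synthetic daily series: amplitude driven by an external parameter x.
x = [math.sin(0.2 * t) for t in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.8 * y[t - 1] + 0.5 * x[t])

a, b = fit_arx(y, x)
pred = [a * y[t - 1] + b * x[t] for t in range(1, 200)]
print(round(a, 2), round(b, 2))          # recovers ~0.8, ~0.5
print(round(pearson_r(y[1:], pred), 2))  # near 1.0 for noise-free data
```

On real, noisy data the fitted coefficients would of course not match the generator exactly, and the r/RMSE pair would separate one-step from multi-step skill as in the abstract.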
Directory of Open Access Journals (Sweden)
Christian Vögeli
2016-12-01
Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydropower. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim of improving the accuracy of the simulated spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and its performance is analysed. The scaling method is only applied when it is snowing; for rainfall, the precipitation is distributed by interpolation, with a simple air temperature threshold used to determine the precipitation phase. It was found that the accuracy of the spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, by using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance for spatial snow distribution is improved.
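The air-temperature threshold for the precipitation phase and the snow-only scaling described above can be sketched as follows. The 1 °C threshold and the scaling rule are illustrative assumptions, not values taken from the paper.

```python
def precipitation_phase(air_temp_c, threshold_c=1.0):
    """Simple air-temperature threshold for the precipitation phase.
    The threshold value here is an assumed placeholder."""
    return "snow" if air_temp_c <= threshold_c else "rain"

def scale_precip(precip_mm, air_temp_c, snowdepth_ratio):
    """Scale precipitation by the ratio of remotely sensed to simulated
    snow depth, but only when the phase is snow; rain is left for the
    usual interpolation and is not scaled."""
    if precipitation_phase(air_temp_c) == "snow":
        return precip_mm * snowdepth_ratio
    return precip_mm

print(scale_precip(10.0, -2.0, 1.5))  # snowing: scaled to 15.0
print(scale_precip(10.0, 5.0, 1.5))   # raining: unchanged, 10.0
```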
Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin
2016-08-15
Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We addressed uncertainties in the MARINA Nutrient Model. Between 1970 and 2000, river export of dissolved N and P increased by a factor of 2-8, depending on the sea and nutrient form, and thus the risk of coastal eutrophication increased. Direct losses of manure to rivers contributed 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf are drier, and thus transport fewer nutrients. For the future, we calculate further increases in river export of nutrients. The MARINA Nutrient Model quantifies the main sources of coastal water pollution at the sub-basin scale. This information can contribute to formulation of
Robertson, Dale M.; Saad, David A.
2011-01-01
Nutrient input to the Laurentian Great Lakes continues to cause problems with eutrophication. To reduce the extent and severity of these problems, target nutrient loads were established and Total Maximum Daily Loads are being developed for many tributaries. Without detailed loading information it is difficult to determine if the targets are being met and how to prioritize rehabilitation efforts. To help address these issues, SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed for estimating loads and sources of phosphorus (P) and nitrogen (N) from the United States (U.S.) portion of the Great Lakes, Upper Mississippi, Ohio, and Red River Basins. Results indicated that recent U.S. loadings to Lakes Michigan and Ontario are similar to those in the 1980s, whereas loadings to Lakes Superior, Huron, and Erie decreased. Highest loads were from tributaries with the largest watersheds, whereas highest yields were from areas with intense agriculture and large point sources of nutrients. Tributaries were ranked based on their relative loads and yields to each lake. Input from agricultural areas was a significant source of nutrients, contributing ∼33-44% of the P and ∼33-58% of the N, except for areas around Superior with little agriculture. Point sources were also significant, contributing ∼14-44% of the P and 13-34% of the N. Watersheds around Lake Erie contributed nutrients at the highest rate (similar to intensively farmed areas in the Midwest) because they have the largest nutrient inputs and highest delivery ratio.
Directory of Open Access Journals (Sweden)
Koichi Kobayashi
2016-01-01
A networked control system (NCS) is a control system in which components such as plants and controllers are connected through communication networks. Self-triggered control is a well-known control method for NCSs in which, for sampled-data control systems, both the control input and the aperiodic sampling interval (i.e., the transmission interval) are computed simultaneously. In this paper, a self-triggered model predictive control (MPC) method for discrete-time linear systems with disturbances is proposed. In the conventional MPC method, only the first element of the control input sequence obtained by solving the finite-time optimal control problem is sent and applied to the plant. In the proposed method, the first several elements of the obtained control input sequence are sent to the plant, and each element is applied sequentially. The number of elements is decided according to the effect of disturbances; in other words, the transmission intervals can be controlled. Finally, the effectiveness of the proposed method is shown by numerical simulations.
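A toy sketch of the transmission-interval decision described above: the number of leading elements of the MPC input sequence sent to the plant shrinks as the disturbance bound grows, so replanning happens sooner under heavy disturbances. The linear decision rule and all names here are illustrative assumptions, not the rule derived in the paper.

```python
def elements_to_send(u_sequence, disturbance_bound, w_max=1.0, n_min=1):
    """Decide how many leading elements of the MPC input sequence to
    transmit: larger disturbances -> send fewer elements (replan sooner).
    The linear rule below is an illustrative assumption."""
    n = len(u_sequence)
    frac = max(0.0, 1.0 - disturbance_bound / w_max)
    return max(n_min, min(n, n_min + int(frac * (n - n_min))))

# Horizon-5 input sequence, e.g. from an optimal control solver (stub).
u_seq = [-0.5, -0.25, -0.1, 0.0, 0.0]
print(elements_to_send(u_seq, disturbance_bound=0.1))  # small disturbance: send many
print(elements_to_send(u_seq, disturbance_bound=0.9))  # large disturbance: send few
```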
Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.
Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan
2014-01-01
Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making.
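The embodied (direct plus indirect) energy intensities described above follow from the standard input-output identity e_total = e_direct (I - A)^-1, where A is the technical-coefficient matrix. A minimal two-sector sketch, with made-up coefficients rather than Beijing data:

```python
def leontief_total_requirements(A):
    """Invert (I - A) for a 2-sector economy (pure-Python 2x2 inverse)."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Illustrative 2-sector technical-coefficient matrix and direct energy
# intensities (tce per unit output); all values are made up for the sketch.
A = [[0.2, 0.3],
     [0.1, 0.4]]
e_direct = [1.0, 0.5]  # direct energy intensity by sector
L = leontief_total_requirements(A)
# Embodied (direct + indirect) intensity: e_total = e_direct * (I - A)^-1
e_total = [sum(e_direct[i] * L[i][j] for i in range(2)) for j in range(2)]
final_demand = [100.0, 50.0]
embodied = sum(e_total[j] * final_demand[j] for j in range(2))
print([round(v, 3) for v in e_total])  # [1.444, 1.556]
print(round(embodied, 1))              # 222.2
```

The gap between e_total and e_direct is exactly the "indirect" share whose growth the abstract attributes to the shift towards a consumption-based economy.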
Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model (PERMODELAN INDEKS HARGA KONSUMEN INDONESIA DENGAN MENGGUNAKAN MODEL INTERVENSI MULTI INPUT)
Novianti, P.W.
2017-01-24
Several events are expected to have affected fluctuations of the Indonesian Consumer Price Index (CPI), i.e. the 1997/1998 financial crisis, fuel price rises, base-year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base-year changes. The objective of this research is to obtain a multi-input intervention model that can describe the magnitude and duration of each event's effect on the CPI. Most intervention studies done so far consider only a single intervention with a single input, either a step or a pulse function. A multi-input intervention was used for the Indonesian CPI because several events are expected to have affected it. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI as well. In general, those events had positive effects on the CPI, except the events of April 2002 and July 2003, which had negative effects.
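Step and pulse intervention inputs, the two building blocks mentioned above, can be sketched as below. The static multi-input combination is a deliberate simplification: real intervention models pass each input through transfer function dynamics (delay and decay), which are omitted here.

```python
def step_input(n, t0):
    """Step intervention S_t: 0 before t0, 1 from t0 on (permanent shift)."""
    return [1 if t >= t0 else 0 for t in range(n)]

def pulse_input(n, t0):
    """Pulse intervention P_t: 1 only at t0 (one-off shock)."""
    return [1 if t == t0 else 0 for t in range(n)]

def multi_input_effect(n, interventions):
    """Sum of several interventions, each given as (kind, onset, magnitude).
    A simplified static multi-input effect without transfer function
    dynamics."""
    effect = [0.0] * n
    for kind, t0, omega in interventions:
        series = step_input(n, t0) if kind == "step" else pulse_input(n, t0)
        for t in range(n):
            effect[t] += omega * series[t]
    return effect

# e.g. a permanent step (a fuel price rise) plus a one-off negative shock:
eff = multi_input_effect(10, [("step", 3, 2.0), ("pulse", 6, -1.0)])
print(eff)  # 0 before t=3, +2.0 after, with a dip to 1.0 at t=6
```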
Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US
Directory of Open Access Journals (Sweden)
T. Myers
2012-04-01
The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the contiguous United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular in the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (Goddard Earth Observing System-Chem (GEOS-Chem) and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.
A synaptic input portal for a mapped clock oscillator model of neuronal electrical rhythmic activity
Zariffa, José; Ebden, Mark; Bardakjian, Berj L.
2004-09-01
Neuronal electrical oscillations play a central role in a variety of situations, such as epilepsy and learning. The mapped clock oscillator (MCO) model is a general model of transmembrane voltage oscillations in excitable cells. In order to be able to investigate the behaviour of neuronal oscillator populations, we present a neuronal version of the model. The neuronal MCO includes an extra input portal, the synaptic portal, which can reflect the biological relationships in a chemical synapse between the frequency of the presynaptic action potentials and the postsynaptic resting level, which in turn affects the frequency of the postsynaptic potentials. We propose that the synaptic input-output relationship must include a power function in order to be able to reproduce physiological behaviour such as resting level saturation. One linear and two power functions (Butterworth and sigmoidal) are investigated, using the case of an inhibitory synapse. The linear relation was not able to produce physiologically plausible behaviour, whereas both the power function examples were appropriate. The resulting neuronal MCO model can be tailored to a variety of neuronal cell types, and can be used to investigate complex population behaviour, such as the influence of network topology and stochastic resonance.
Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US
Directory of Open Access Journals (Sweden)
T. Myers
2013-01-01
The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular in the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (Goddard Earth Observing System-Chem (GEOS-Chem) and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating the unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance that leads to filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed carbon dioxide fluxes and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the
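The core of characteristic (1), correcting an unobserved parameter through its sample covariance with an observed state in a joint state vector, can be sketched with a scalar ensemble Kalman update. The kernel-smoothing step is omitted, and all numbers are illustrative, not values from the study.

```python
import random

def enkf_joint_update(ensemble, obs, obs_var):
    """One EnKF analysis step on a joint [state, parameter] ensemble.
    Only the state is observed; the parameter is corrected through its
    sample covariance with the state (the joint-vector idea of the SEnKF,
    without the kernel-smoothing step)."""
    n = len(ensemble)
    mean = [sum(m[i] for m in ensemble) / n for i in (0, 1)]
    # Sample covariance of state with itself and with the parameter.
    p_ss = sum((m[0] - mean[0]) ** 2 for m in ensemble) / (n - 1)
    p_ps = sum((m[1] - mean[1]) * (m[0] - mean[0]) for m in ensemble) / (n - 1)
    k_s = p_ss / (p_ss + obs_var)  # Kalman gain for the state
    k_p = p_ps / (p_ss + obs_var)  # gain for the parameter
    updated = []
    for s, p in ensemble:
        innov = obs + random.gauss(0, obs_var ** 0.5) - s  # perturbed obs
        updated.append((s + k_s * innov, p + k_p * innov))
    return updated

random.seed(0)
# Ensemble whose parameter is positively correlated with the state.
ens = [(x, 0.5 * x + random.gauss(0, 0.1))
       for x in [random.gauss(5.0, 1.0) for _ in range(200)]]
new_ens = enkf_joint_update(ens, obs=7.0, obs_var=0.25)
new_state_mean = sum(m[0] for m in new_ens) / len(new_ens)
print(round(new_state_mean, 1))  # pulled from ~5 toward the observation 7
```

Because the parameter co-varies with the state, observing only the state still moves the parameter ensemble, which is exactly how eddy flux data constrain light use efficiency and the other parameters above.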
[Calculation of parameters in forest evapotranspiration model].
Wang, Anzhi; Pei, Tiefan
2003-12-01
Forest evapotranspiration is an important component not only of the water balance, but also of the energy balance. Simulating forest evapotranspiration accurately is in great demand for the development of forest hydrology and forest meteorology, and provides a theoretical basis for the management and utilization of water resources and forest ecosystems. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanistic model for estimating forest evapotranspiration, based on aerodynamic principles and the energy balance equation. Using data measured by the Routine Meteorological Measurement System and the Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum phi_m, and stability function for heat phi_h were ascertained. The displacement height of the study site was 17.8 m, close to the mean canopy height, and functions for phi_m and phi_h varying with the gradient Richardson number Ri were constructed.
Realistic modelling of the seismic input Site effects and parametric studies
Romanelli, F; Vaccari, F
2002-01-01
We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and stru...
LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints
Swei, Sean S.M.; Ayoubi, Mohammad A.
2017-01-01
This paper presents a study of fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller is demonstrated through its application to the airfoil flutter suppression.
A leech model for homeostatic plasticity and motor network recovery after loss of descending inputs.
Lane, Brian J
2016-04-01
Motor networks below the site of spinal cord injury (SCI) and their reconfiguration after loss of central inputs are poorly understood but remain of great interest in SCI research. Harley et al. (J Neurophysiol 113: 3610-3622, 2015) report a striking locomotor recovery paradigm in the leech Hirudo verbana with features that are functionally analogous to SCI. They propose that this well-established neurophysiological system could potentially be repurposed to provide a complementary model to investigate basic principles of homeostatic compensation relevant to SCI research.
Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi
2017-09-01
Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of companies listed on Bursa Malaysia in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials industry. The method of analysis employed was Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds; this method of ranking the relative efficiency of decision making units (DMUs) is value-added by the Balance Index. The data were for the year 2015, and the population of the research comprises the companies listed on the stock market in the construction and materials industry (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
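At its core, the CCR model scores each decision making unit (DMU) by a weighted-output to weighted-input ratio, with a linear program choosing for each DMU the weights most favourable to it. The sketch below computes the ratio for fixed common weights only, as a simplification of that LP; the ratio names, weights and numbers are made up for illustration.

```python
def efficiency_ratio(outputs, inputs, out_w, in_w):
    """Weighted output/input ratio -- the objective the CCR model
    maximizes per DMU (here with fixed weights, not the LP optimum)."""
    return (sum(o * w for o, w in zip(outputs, out_w))
            / sum(i * w for i, w in zip(inputs, in_w)))

# Hypothetical stocks: outputs = (ROE, EPS), inputs = (debt-to-equity, P/E).
stocks = {
    "A": ((0.15, 1.2), (0.8, 12.0)),
    "B": ((0.10, 0.9), (0.5, 10.0)),
    "C": ((0.20, 1.5), (1.2, 18.0)),
}
out_w, in_w = (1.0, 1.0), (1.0, 0.1)
scores = {k: efficiency_ratio(o, i, out_w, in_w) for k, (o, i) in stocks.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['A', 'B', 'C'] with these made-up numbers
```

The Balance Index refinement cited in the abstract exists precisely because the plain CCR scores often tie several DMUs at efficiency 1, preventing a complete ranking.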
Model Predictive Control of Linear Systems over Networks with State and Input Quantizations
Directory of Open Access Journals (Sweden)
Xiao-Ming Tang
2013-01-01
Although there have been many works on the synthesis and analysis of networked control systems (NCSs) with data quantization, most of the results are developed for the case where the quantizer exists in only one of the transmission links (either the sensor-to-controller link or the controller-to-actuator link). This paper investigates synthesis approaches of model predictive control (MPC) for NCSs subject to data quantization in both links. Firstly, a novel model to describe the state and input quantizations of the NCS is addressed by extending the sector bound approach. Then, from the new model, two synthesis approaches of MPC are developed: one parameterizes the infinite-horizon control moves into a single state feedback law, and the other into a free control move followed by the single state feedback law. Finally, stability results that explicitly consider the satisfaction of input and state constraints are presented. A numerical example is given to illustrate the effectiveness of the proposed MPC.
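A common concrete instance of the sector bound approach mentioned above is the logarithmic quantizer, whose quantization error satisfies |q(v) - v| <= delta*|v| with delta = (1 - rho)/(1 + rho). A sketch, with illustrative parameter values not taken from the paper:

```python
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer with levels u0 * rho**i; the error satisfies
    the sector bound |q(v) - v| <= delta*|v|, delta = (1-rho)/(1+rho)."""
    if v == 0:
        return 0.0
    sign = 1.0 if v > 0 else -1.0
    a = abs(v)
    # Level u_i is chosen when u_i*(1+rho)/2 < a <= u_i*(1+rho)/(2*rho),
    # i.e. the regions are separated at the arithmetic means of levels.
    c = 2.0 * a / (u0 * (1.0 + rho))
    i = math.floor(math.log(c) / math.log(rho)) + 1
    return sign * u0 * rho ** i

delta = (1 - 0.5) / (1 + 0.5)  # sector bound, 1/3 for rho = 0.5
for v in (0.7, 0.76, -1.3, 3.7, 0.01):
    q = log_quantize(v)
    assert abs(q - v) <= delta * abs(v) + 1e-12
print(log_quantize(0.7), log_quantize(0.76))  # 0.5 1.0
```

Because the error is bounded by a sector in v, quantization can be treated as a sector-bounded uncertainty in the closed loop, which is what makes the robust MPC synthesis above tractable.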
Transfer function modeling of damping mechanisms in distributed parameter models
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
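The complex-stiffness representation of hysteresis referred to above can be written in its standard textbook form (stated here as general background, not quoted from the paper):

```latex
% Hysteretic (structural) damping via complex stiffness with loss factor \eta:
k^{*} = k\,(1 + i\eta),
% so a single mode under harmonic forcing F e^{i\omega t} obeys
m\,\ddot{q}(t) + k\,(1 + i\eta)\,q(t) = F\,e^{i\omega t},
% giving the steady-state amplitude
|q| = \frac{F}{\sqrt{(k - m\omega^{2})^{2} + (k\eta)^{2}}}.
```

Unlike viscous damping, the dissipative term here is proportional to stiffness rather than to frequency, which is why the transfer function treatment in the abstract is needed to obtain causal time-domain solutions.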
Model independent determination of the CKM phase γ using input from D⁰-D̄⁰ mixing
Energy Technology Data Exchange (ETDEWEB)
Harnew, Samuel; Rademacker, Jonas [H H Wills Physics Laboratory, University of Bristol,Bristol (United Kingdom)
2015-03-31
We present a new, amplitude-model-independent method to measure the CP violation parameter γ in B⁻→DK⁻ and related decays. Information on charm interference parameters, usually obtained from charm threshold data, is obtained from charm mixing. By splitting the phase space of the D meson decay into several bins, enough information can be gained to measure γ without input from the charm threshold. We demonstrate the feasibility of this approach with a simulation study of B⁻→DK⁻ with D→K⁺π⁻π⁺π⁻. We compare the performance of our novel approach to that of a previously proposed binned analysis which uses charm interference parameters obtained from threshold data. While both methods provide useful constraints, the combination of the two by far outperforms either of them applied on their own. Such an analysis would provide a highly competitive measurement of γ. Our simulation studies indicate, subject to assumptions about data yields and the amplitude structure of D⁰→K⁺π⁻π⁺π⁻, a statistical uncertainty on γ of ∼12° with existing data and ∼4° for the LHCb upgrade.
Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.
2015-06-01
Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) in Nguni cows of different colours raised in two low-input farms and a commercial stud were determined. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid coastal areas, while Honeydale is a high-input, dry inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season on tick loads on different body parts, cortisol levels and HP in blood from Nguni cows were determined. The highest tick loads were recorded under the tail and the lowest on the head of each of the animals. Cortisol and THI were significantly lower, and relationships were observed among cortisol levels, THI, HP, tick loads on different body parts and heat stress in Nguni cows.
On the modeling of internal parameters in hyperelastic biological materials
Giantesio, Giulia
2016-01-01
This paper concerns the behavior of hyperelastic energies depending on an internal parameter. First, the situation in which the internal parameter is a function of the gradient of the deformation is presented. Second, two models where the parameter describes the activation of skeletal muscle tissue are analyzed. In those models, the activation parameter depends on the strain and it is important to consider the derivative of the parameter with respect to the strain in order to capture the proper behavior of the stress.
Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.
2016-12-01
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols
Determining extreme parameter correlation in ground water models
DEFF Research Database (Denmark)
Hill, Mary Cole; Østerby, Ole
2003-01-01
In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters...
Karandish, Fatemeh; Šimůnek, Jiří
2016-12-01
Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a
International trade inoperability input-output model (IT-IIM): theory and application.
Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y
2009-01-01
The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbation for each sector as inputs, the IIM provides the degree of economic production impacts on all industry sectors as the outputs for the model. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the resulting international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).
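The equilibrium form of the IIM described above reduces to a single linear solve: inoperability q satisfies q = A*q + c*, where A* is the interdependency matrix and c* is the direct perturbation. A minimal sketch with purely illustrative three-sector numbers (not taken from the article):

```python
import numpy as np

# Hypothetical 3-sector interdependency matrix A* (illustrative values only,
# not from the IT-IIM paper).
A_star = np.array([
    [0.1, 0.2, 0.0],
    [0.3, 0.1, 0.1],
    [0.0, 0.2, 0.1],
])

# Direct perturbation c*: sector 0 loses 5% of its functionality,
# e.g. due to a disrupted port of entry.
c_star = np.array([0.05, 0.0, 0.0])

# Equilibrium inoperability: solve (I - A*) q = c*
q = np.linalg.solve(np.eye(3) - A_star, c_star)

# Sectors can then be prioritized by their inoperability level
ranking = np.argsort(q)[::-1]
```

Note that q[0] exceeds the direct perturbation 0.05: interdependencies feed part of every sector's inoperability back into sector 0, which is exactly the amplification effect the model is designed to capture.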
Multiregional input-output model for the evaluation of Spanish water flows.
Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio
2013-01-01
We construct a multiregional input-output model for Spain, in order to evaluate the pressures on the water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed with those interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, or the UK. To build our database, we reconcile regional IO tables, national and regional accountancy of Spain, trade and water data. Results show an important imbalance between origin of water resources and final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. Main virtual water exporters are the South and Central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, Basque country, and the Mediterranean coast. The paper shows the different location of direct and indirect consumers of water in Spain and how the economic trade and consumption pattern of certain areas has significant impacts on the availability of water resources in other different and often drier regions.
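The core accounting behind such water-extended input-output models is the Leontief inverse applied to a vector of direct water intensities, which splits total water use into direct and indirect (virtual) components. A minimal sketch with made-up two-sector numbers (not Spanish data):

```python
import numpy as np

# Toy 2-sector economy (illustrative numbers only).
A = np.array([[0.2, 0.1],      # technical coefficients matrix
              [0.3, 0.4]])
f = np.array([100.0, 50.0])    # final demand per sector
w = np.array([5.0, 0.5])       # direct water use per unit output (e.g. m3)

# Total output needed to satisfy final demand: x = (I - A)^{-1} f
x = np.linalg.solve(np.eye(2) - A, f)

# Water embodied in final demand (direct + indirect) vs direct use only
water_total = w @ x
water_direct = w @ f
```

The gap between `water_total` and `water_direct` is the indirect (virtual) water mobilized through intermediate inputs; the multiregional version of the paper applies the same algebra with region-by-region blocks in A.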
A Water-Withdrawal Input-Output Model of the Indian Economy.
Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu
2016-02-02
Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from the 727 BCM direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while scarce groundwater associated with crops is largely contributed by northern states.
Model comparisons and genetic and environmental parameter ...
African Journals Online (AJOL)
arc
South African Journal of Animal Science 2005, 35 (1) ... Genetic and environmental parameters were estimated for pre- and post-weaning average daily gain ..... and BWT (and medium maternal genetic correlations) indicates that these traits ...
Bowden, Gavin J.; Maier, Holger R.; Dandy, Graeme C.
2005-01-01
This paper is the second of a two-part series in this issue that presents a methodology for determining an appropriate set of model inputs for artificial neural network (ANN) models in hydrologic applications. The first paper presented two input determination methods. The first method utilises a measure of dependence known as the partial mutual information (PMI) criterion to select significant model inputs. The second method utilises a self-organising map (SOM) to remove redundant input variables, and a hybrid genetic algorithm (GA) and general regression neural network (GRNN) to select the inputs that have a significant influence on the model's forecast. In the first paper, both methods were applied to synthetic data sets and were shown to lead to a set of appropriate ANN model inputs. To verify the proposed techniques, it is important that they are applied to a real-world case study. In this paper, the PMI algorithm and the SOM-GAGRNN are used to find suitable inputs to an ANN model for forecasting salinity in the River Murray at Murray Bridge, South Australia. The proposed methods are also compared with two methods used in previous studies, for the same case study. The two proposed methods were found to lead to more parsimonious models with a lower forecasting error than the models developed using the methods from previous studies. To verify the robustness of each of the ANNs developed using the proposed methodology, a real-time forecasting simulation was conducted. This validation data set consisted of independent data from a six-year period from 1992 to 1998. The ANN developed using the inputs identified by the stepwise PMI algorithm was found to be the most robust for this validation set. The PMI scores obtained using the stepwise PMI algorithm revealed useful information about the order of importance of each significant input.
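The PMI criterion used above involves kernel density estimates and a bootstrap stopping rule; as a simplified illustration of the underlying idea (ranking candidate inputs by their information content about the output), here is a crude histogram-based mutual information estimator. This is our simplification for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate in nats (a crude stand-in for the
    kernel-based PMI criterion used in the paper)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

# Synthetic candidate inputs: one relevant, one irrelevant.
n = 20000
relevant = rng.standard_normal(n)
irrelevant = rng.standard_normal(n)
target = relevant + 0.3 * rng.standard_normal(n)

mi_rel = mutual_information(relevant, target)
mi_irr = mutual_information(irrelevant, target)
```

An input-selection loop would greedily keep the candidate with the highest (partial) MI and stop when the remaining scores fall to the noise floor, which is what the bootstrap test in the PMI algorithm formalizes.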
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
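For the 2-parameter case, maximum likelihood estimation reduces to a one-dimensional profile equation for the shape k; once k is known, the scale follows in closed form. A damped fixed-point sketch of this standard textbook procedure (not the dissertation's code):

```python
import math
import random

def weibull_mle(xs, iters=100):
    """Profile-likelihood MLE for the 2-parameter Weibull (shape k, scale lam).

    The shape solves  1/k = sum(x^k ln x)/sum(x^k) - mean(ln x);
    a damped fixed-point iteration is used for robustness.
    """
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(xs)
    k = 1.0
    for _ in range(iters):
        num = sum(x ** k * math.log(x) for x in xs)
        den = sum(x ** k for x in xs)
        k_new = 1.0 / (num / den - mean_log)
        k = 0.5 * k + 0.5 * k_new          # damping step
    # Closed-form scale given the shape estimate
    lam = (sum(x ** k for x in xs) / len(xs)) ** (1.0 / k)
    return k, lam

random.seed(0)
# random.weibullvariate(alpha, beta): alpha is the scale, beta the shape
data = [random.weibullvariate(2.0, 1.5) for _ in range(5000)]
k_hat, lam_hat = weibull_mle(data)
```

With 5000 samples the estimates land close to the true (k, lam) = (1.5, 2.0); the 3-parameter model studied in the dissertation adds a location parameter, which is precisely where the existence questions it addresses become delicate.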
Energy Technology Data Exchange (ETDEWEB)
Dinca, Laurian; Aldemir, Tunc; Rizzoni, Giorgio
1999-06-01
A probabilistic approach is presented which can be used for the estimation of system parameters and unmonitored state variables towards model-based fault diagnosis in dynamic systems. The method can be used with any type of input-output model and can accommodate noisy data and/or parameter/modeling uncertainties. The methodology is based on Markovian representation of system dynamics in discretized state space. The example system used for the illustration of the methodology focuses on the intake, fueling, combustion and exhaust components of internal combustion engines. The results show that the methodology is capable of estimating the system parameters and tracking the unmonitored dynamic variables within user-specified magnitude intervals (which may reflect noise in the monitored data, random changes in the parameters or modeling uncertainties in general) within data collection time and hence has potential for on-line implementation.
Time-Varying FOPDT Modeling and On-line Parameter Identification
DEFF Research Database (Denmark)
Yang, Zhenyu; Sun, Zhen
2013-01-01
A type of Time-Varying First-Order Plus Dead-Time (TV-FOPDT) model is extended from SISO format into a MISO version by explicitly taking the disturbance input into consideration. Correspondingly, a set of on-line parameter identification algorithms oriented to the MISO TV-FOPDT model are proposed based on the Mixed-Integer-Nonlinear Programming, Least-Mean-Square and sliding window techniques. The proposed approaches can simultaneously estimate the time-dependent system parameters, as well as the unknown disturbance input if it is the case, in an on-line manner. The proposed concepts and algorithms are firstly illustrated through a numerical example, and then applied to investigate transient superheat dynamic modeling in a supermarket refrigeration system.
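For readers unfamiliar with the FOPDT structure G(s) = K·e^(-θs)/(τs+1), the classical time-invariant SISO case can be identified graphically from a step response: the gain K from the steady state, the dead time θ from the onset of the response, and the time constant τ from the 63.2% point. A sketch of that baseline (the TV-FOPDT algorithms in the paper are far more elaborate):

```python
import math

def fopdt_step(K, tau, theta, dt, n):
    """Unit-step response of K * exp(-theta*s) / (tau*s + 1)."""
    y = []
    for i in range(n):
        t = i * dt
        y.append(0.0 if t < theta else K * (1.0 - math.exp(-(t - theta) / tau)))
    return y

def identify(y, dt, tol=1e-6):
    """Graphical FOPDT identification from a noise-free step response."""
    K = y[-1]                                             # steady-state gain
    i0 = next(i for i, v in enumerate(y) if v > tol)      # response onset
    theta = i0 * dt
    i63 = next(i for i, v in enumerate(y) if v >= 0.632 * K)  # 63.2% point
    tau = i63 * dt - theta
    return K, tau, theta

y = fopdt_step(K=2.0, tau=5.0, theta=1.0, dt=0.01, n=10000)
K, tau, theta = identify(y, dt=0.01)
```

The recovered (K, tau, theta) match the simulated (2.0, 5.0, 1.0) to within the sampling step; the paper's contribution is doing this on-line while the parameters drift and an unmeasured disturbance acts on the loop.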
Limited fetch revisited: comparison of wind input terms in surface waves modeling
Andrei, Pushkarev
2015-01-01
The results of numerical solution of the Hasselmann kinetic equation ($HE$) for wind-driven sea spectra in the fetch-limited geometry are presented. Five versions of the source functions, including the recently introduced ZRP model, have been studied for the exact expression of Snl and high-frequency implicit dissipation due to wave-breaking. Four out of five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which showed especially good agreement with the dozen field observations performed in seas and lakes since 1971. The fifth experiment, using the WAM3 wind input term, employed additional spectral peak dissipation and reproduced the results of a previous similar numerical simulation, but was in good agreement with the field experiments only for moderate fetches, demonstrati...
A Regularized SNPOM for Stable Parameter Estimation of RBF-AR(X) Model.
Zeng, Xiaoyong; Peng, Hui; Zhou, Feng
2017-01-20
Recently, the radial basis function (RBF) network-style coefficients AutoRegressive (with exogenous inputs) [RBF-AR(X)] model identified by the structured nonlinear parameter optimization method (SNPOM) has attracted considerable interest because of its significant performance in nonlinear system modeling. However, this promising technique may occasionally confront the problem that the parameters diverge during the optimization process, a potential issue ignored by most researchers. In this paper, a regularized SNPOM, together with a regularization parameter detection technique, is presented to estimate the parameters of RBF-AR(X) models. This approach first separates the parameters of an RBF-AR(X) model into a linear parameter set and a nonlinear parameter set, and then combines a gradient-based nonlinear optimization algorithm for estimating the nonlinear parameters with the regularized least squares method for estimating the linear parameters. Several examples demonstrate that the proposed approach is effective in coping with the potential instability in the parameter search process, and may also yield better or similar multistep forecasting accuracy and better robustness than the previous method.
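The key structural idea, splitting the parameters into a linear set solved by (regularized) least squares and a nonlinear set handled by an outer optimizer, can be illustrated on a toy exponential model. The grid search below stands in for the SNPOM's gradient-based nonlinear step, and the fixed regularization constant stands in for the paper's detection technique; all numbers are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 200)
# Toy separable model: y = c1 * exp(-k t) + c2, with k nonlinear and (c1, c2) linear
y = 2.0 * np.exp(-1.3 * t) + 0.5 + 0.01 * rng.standard_normal(200)

lam = 1e-6   # fixed ridge regularization (the paper detects this adaptively)

def linear_fit(k):
    """Regularized least squares for the linear coefficients given nonlinear k."""
    Phi = np.column_stack([np.exp(-k * t), np.ones_like(t)])
    c = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ y)
    residual = float(np.sum((Phi @ c - y) ** 2))
    return c, residual

# Crude outer grid search over the nonlinear parameter
ks = np.linspace(0.5, 2.5, 201)
best_k = min(ks, key=lambda k: linear_fit(k)[1])
c_hat, _ = linear_fit(best_k)
```

Because the inner solve is closed-form, the outer optimizer only ever searches the (low-dimensional) nonlinear parameter space, which is what makes the separated scheme both fast and, with the ridge term, resistant to the divergence the paper addresses.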
Development of a General Form CO_{2} and Brine Flux Input Model
Energy Technology Data Exchange (ETDEWEB)
Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-08-01
The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO_{2} injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO_{2} and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via the fault detection.
Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease
Directory of Open Access Journals (Sweden)
Tutu Oyelami
2014-04-01
Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption, loss of memory as well as anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region that was associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the CA1 hippocampus area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction in the GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of the GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to increased seizure sensitivity, premature death and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.
Sensitivity of a Shallow-Water Model to Parameters
Kazantsev, Eugene
2011-01-01
An adjoint-based technique is applied to a shallow water model in order to estimate the influence of the model's parameters on the solution. Among the parameters considered are the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress tension. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. possibility of improving the model by the parameter's control, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in a classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...
Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.
2008-01-01
There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled
Model algorithm control using neural networks for input delayed nonlinear control system
Institute of Scientific and Technical Information of China (English)
Yuanliang Zhang; Kil To Chong
2015-01-01
The performance of the model algorithm control method is partially based on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is high. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping to represent nonlinear functions as well as their inverse functions. This paper presents a model algorithm control method using neural networks for nonlinear time delay systems. Two neural networks are used in the control scheme. One neural network is trained as the model of the nonlinear time delay system, and the other one produces the control inputs. The neural networks are combined with the model algorithm control method to control the nonlinear time delay systems. Three examples are used to illustrate the proposed control method. The simulation results show that the proposed control method has a good control performance for nonlinear time delay systems.
Teams in organizations: from input-process-output models to IMOI models.
Ilgen, Daniel R; Hollenbeck, John R; Johnson, Michael; Jundt, Dustin
2005-01-01
This review examines research and theory relevant to work groups and teams typically embedded in organizations and existing over time, although many studies reviewed were conducted in other settings, including the laboratory. Research was organized around a two-dimensional system based on time and the nature of explanatory mechanisms that mediated between team inputs and outcomes. These mechanisms were affective, behavioral, cognitive, or some combination of the three. Recent theoretical and methodological work is discussed that has advanced our understanding of teams as complex, multilevel systems that function over time, tasks, and contexts. The state of both the empirical and theoretical work is compared as to its impact on present knowledge and future directions.
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
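The Gauss-Newton update referenced above (linearize the residual, then solve the normal equations J^T J δ = -J^T r) is independent of the shape-model machinery, so it can be shown on a toy curve-fitting problem. The exponential model below is illustrative only, not the paper's 3D surface setting:

```python
import numpy as np

# Toy nonlinear least-squares problem: fit y = exp(a*t) + b to noisy data.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
a_true, b_true = 1.5, 0.5
y = np.exp(a_true * t) + b_true + 0.01 * rng.standard_normal(50)

p = np.array([1.0, 0.0])                      # initial guess for [a, b]
for _ in range(20):
    r = np.exp(p[0] * t) + p[1] - y           # residual vector
    J = np.column_stack([t * np.exp(p[0] * t),  # d r / d a
                         np.ones_like(t)])      # d r / d b
    p -= np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton step

a_hat, b_hat = p
```

For small-residual problems like shape-model fitting, Gauss-Newton converges quickly near the optimum without needing second derivatives, which is the same reason it is attractive for the 3D surface registration described in the abstract.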
Directory of Open Access Journals (Sweden)
B. Bisselink
2016-12-01
New hydrological insights: Results indicate large discrepancies in terms of the linear correlation (r), bias (β) and variability (γ) between the observed and simulated streamflows when using different precipitation estimates as model input. The best model performance was obtained with products which ingest gauge data for bias correction. However, catchment behavior was difficult to capture using a single parameter set, and it was difficult to obtain a single robust parameter set for each catchment, which indicates that transposing model parameters should be carried out with caution. Model parameters depend on the precipitation characteristics of the calibration period and should therefore only be used in target periods with similar precipitation characteristics (wet/dry).
Chang, Jui-Yang; Pigorini, Andrea; Massimini, Marcello; Tononi, Giulio; Nobili, Lino; Van Veen, Barry D
2012-01-01
A multivariate autoregressive (MVAR) model with exogenous inputs (MVARX) is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a MVAR system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients with two stimulation levels in one patient and two stimulation sites in another resulting in a total of 10 datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in pre-stimulus and evoked recordings. We also compare integrated information-a measure of intracortical communication thought to reflect the capacity for consciousness-associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases.
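At its core, fitting an MVARX model with a known input is linear least squares once the regressors are stacked. A minimal order-1 sketch with a direct (unfiltered) exogenous input; the paper additionally estimates per-channel input filters and the noise characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth MVAR(1) system with an exogenous input:
#   y[t] = A y[t-1] + b u[t] + noise
A_true = np.array([[0.5, 0.1],
                   [-0.2, 0.3]])          # stable: |eigenvalues| < 1
b_true = np.array([1.0, 0.5])

T = 5000
u = rng.standard_normal(T)                # exogenous "stimulation" input
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + b_true * u[t] + 0.1 * rng.standard_normal(2)

# Stack regressors [y[t-1], u[t]] and solve ordinary least squares
X = np.hstack([y[:-1], u[1:, None]])      # shape (T-1, 3)
Y = y[1:]                                 # shape (T-1, 2)
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat, b_hat = coef[:2].T, coef[2]
```

With enough samples the least-squares estimates recover A and b closely; replacing the scalar u[t] term with a short FIR filter per channel gives exactly the filtered-exogenous-input structure the abstract describes.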
Compositional modelling of distributed-parameter systems
Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.
2005-01-01
The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where a historical account is also provided.) The identification of the
Parameter Estimation and Experimental Design in Groundwater Modeling
Institute of Scientific and Technical Information of China (English)
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff
Energy Technology Data Exchange (ETDEWEB)
Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelago Research Inst.], e-mail: jari.hanninen@utu.fi
2012-11-01
Using transfer function (TF) models, we have earlier presented a chain of events between changes in the North Atlantic Oscillation (NAO) and their oceanographic and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO both contributed most to the general understanding of the climate control of runoff events in the Baltic Sea ecosystem. (orig.)
Reconstruction of rocks petrophysical properties as input data for reservoir modeling
Cantucci, B.; Montegrossi, G.; Lucci, F.; Quattrocchi, F.
2016-11-01
The worldwide increasing energy demand has triggered studies focused on defining the underground energy potential even in areas previously discarded or neglected. Nowadays, geological gas storage (CO2 and/or CH4) and geothermal energy are considered strategic for low-carbon energy development. A widespread and safe application of these technologies needs an accurate characterization of the underground in terms of geology, hydrogeology, geochemistry, and geomechanics. However, during the prefeasibility study stage, the limited number of available direct measurements of reservoirs and the high costs of reopening closed deep wells must be taken into account. The aim of this work is to overcome these limits by proposing a new methodology to reconstruct vertical profiles, from the surface to the reservoir base, of: (i) thermal capacity, (ii) thermal conductivity, (iii) porosity, and (iv) permeability, through the integration of well-log information, petrographic observations on inland outcropping samples, and flow and heat transport modeling. As a case study to test our procedure, we selected a deep structure located in the central Tyrrhenian Sea (Italy). The obtained results are consistent with measured data, confirming the validity of the proposed model. Notwithstanding intrinsic limitations due to manual calibration of the model with measured data, this methodology represents a useful tool for reservoir and geochemical modelers who need to define petrophysical input data for underground modeling before well reopening.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
Parameters identification for GTN model and their verification on 42CrMo4 steel
Energy Technology Data Exchange (ETDEWEB)
Kozak, V.; Vlcek, L. [Inst. of Physics of Materials, AS of CR, Brno (Czech Republic)
2005-07-01
The basis of this paper is the exact laboratory measurement of deformation and fracture material characteristics, the evaluation of these parameters, and their application in finite element models of the fracture behaviour of components with defects. The work deals with the ductile fracture of forged 42CrMo4 steel. The R-curve is modelled by 3D FEM using WARP3D and Abaqus. Crack extension is simulated by means of element-extinction algorithms. Determination of the micro-mechanical parameters is based on a combination of tensile tests and microscopic observation. Input parameters for the subsequent computation and simulation, namely f{sub N} and f{sub o}, were obtained on the basis of image analysis. The possibility of transferring these parameters to another specimen is discussed. (orig.)
Spectral tensor parameters for wind turbine load modeling from forested and agricultural landscapes
DEFF Research Database (Denmark)
Chougule, Abhijit S.; Mann, Jakob; Segalini, A.
2015-01-01
A velocity spectral tensor model was evaluated from single-point measurements of wind speed. The model contains three parameters representing the dissipation rate of specific turbulent kinetic energy, a turbulence length scale and the turbulence anisotropy. Sonic anemometer measurements taken over a forested and an agricultural landscape were used to calculate the model parameters for neutral, slightly stable and slightly unstable atmospheric conditions for a selected wind speed interval. The dissipation rate above the forest was nine times that at the agricultural site. No significant ... constant with height at the forest site, whereas the turbulence became more isotropic with height for the agricultural site. Using the three parameters as inputs, we quantified the performance of the model in coherence predictions for vertical separations. The model coherence of all the three velocity ...
Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.
2016-09-01
The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most-used tools for analyzing the linkage between transportation sectors and economic growth, but they offer few clues to the mechanisms linking transport improvements and the broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, to the connected region (Bandung district) using an Input-Output model. The results show a decrease in freight transportation costs of 17 % and an increase of 1.2 % in Bandung District's GDP after the opening of the Cipularang tollway.
Directory of Open Access Journals (Sweden)
Faa Jeng Lin
2016-11-01
This paper outlines the modeling and controller design of a novel two-stage photovoltaic (PV) micro inverter (MI) that eliminates the need for an electrolytic capacitor (E-cap) and an input current sensor. The proposed MI uses an active-clamped current-fed push-pull DC-DC converter cascaded with a full-bridge inverter. Three strategies are proposed to cope with the inherent limitations of a two-stage PV MI: (i) high-speed DC bus voltage regulation using an integrator to deal with the 2nd-harmonic voltage ripples found in single-phase systems; (ii) inclusion of a small film capacitor in the DC bus to achieve ripple-free PV voltage; (iii) improved incremental conductance (INC) maximum power point tracking (MPPT) without the need for current sensing by the PV module. Simulation and experimental results demonstrate the efficacy of the proposed system.
2017-05-01
...that future research should focus predominantly on determining degradation rates of EC in groundwater, vadose zone, and surface soil, in that order... Agency's Office of Pollution Prevention Toxics and Syracuse Research Corporation (http://www.epa.gov/oppt/exposure/pubs/episuitedl.htm). EPI Suite is... paper by Richard and Weidhaas (2014b), they reported results of degradation of IMX-101 components in soil. Control plots, where no plants and no...
Hallegatte, Stéphane
2008-06-01
This article proposes a new modeling framework to investigate the consequences of natural disasters and the following reconstruction phase. Based on input-output tables, its original features are (1) accounting for sector production capacities and for both forward and backward propagation within the economic system; and (2) the introduction of adaptive behaviors. The model is used to simulate the response of the economy of Louisiana to the landfall of Katrina. The model is found to be consistent with available data, and provides two important insights. First, economic processes exacerbate direct losses, and total costs are estimated at $149 billion, for direct losses equal to $107 billion. When exploring the impacts of other possible disasters, it is found that total losses due to a disaster affecting Louisiana increase nonlinearly with respect to direct losses when the latter exceed $50 billion. When direct losses exceed $200 billion, for instance, total losses are twice as large as direct losses. For risk management, therefore, direct losses are insufficient measures of disaster consequences. Second, positive and negative backward propagation mechanisms are essential for the assessment of disaster consequences, and accounting for production capacities is necessary to avoid overestimating the positive effects of reconstruction. A systematic sensitivity analysis shows that, among all parameters, the overproduction capacity in the construction sector and the adaptation characteristic time are the most important.
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology ...
Parameter redundancy in discrete state‐space and integrated models
McCrea, Rachel S.
2016-01-01
Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826
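The detection methods described here rest on a symbolic rank test: if the derivative matrix of an exhaustive summary with respect to the parameters has rank lower than the number of parameters, the model is parameter redundant. A toy sketch of that test, using a made-up two-parameter summary rather than the paper's state-space models:

```python
import sympy as sp

p, q = sp.symbols('p q', positive=True)

# Exhaustive summary in which only the product p*q appears:
# p and q cannot be separated, so the model is redundant.
kappa_red = sp.Matrix([p * q, (p * q) ** 2])
# Summary in which the parameters enter in two independent ways.
kappa_full = sp.Matrix([p * q, p + q])

D_red = kappa_red.jacobian([p, q])
D_full = kappa_full.jacobian([p, q])

print(D_red.rank())   # 1 < 2 parameters -> parameter redundant
print(D_full.rank())  # 2 = 2 parameters -> both estimable
```

The same mechanic explains the integrated-model result in the abstract: stacking the exhaustive summaries of two data sets can raise the rank of the combined derivative matrix even when each summary alone is rank-deficient.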
Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models
Directory of Open Access Journals (Sweden)
Pasquale Martiniello
2011-06-01
Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure applied to the soil by broadcast and injection procedures provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions and the manure and CF treatments showed little interaction. The number of MFU ha−1 of biomass crop gross product produced in autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on the organic carbon and total nitrogen of the topsoil, compared to model I, stressed these parameters as CF did, and their amount was higher in models II and III than in model IV. In percentage terms, the organic carbon and total nitrogen of model I under manure treatment were reduced by about 18.5 and 21.9% in models II and III and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute CF without reducing gross production and sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.
Scaling precipitation input to distributed hydrological models by measured snow distribution
Voegeli, Christian; Lehning, Michael; Wever, Nander; Bavay, Mathias; Bühler, Yves; Marty, Mauro; Molnar, Peter
2016-04-01
Precise knowledge about the snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or water supply and hydropower. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is often driven by spatial interpolations from automatic weather stations (AWS). As AWS are sparsely spread, the data need to be interpolated, leading to errors in the spatial distribution of the snow cover, especially on the subcatchment scale. With the recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and vertical accuracy. Here we use maps of the snow depth distribution, calculated from summer and winter digital surface models acquired with the airborne opto-electronic scanner ADS, to preprocess and redistribute precipitation input data for Alpine3D and improve the accuracy of spatially distributed snow depth simulations. A differentiation between liquid and solid precipitation is made, to account for the different precipitation patterns that can be expected from rain and snowfall. For liquid precipitation, only large-scale distribution patterns are applied to distribute precipitation in the simulation domain. For solid precipitation, an additional small-scale distribution, based on the ADS data, is applied. The large-scale patterns are generated using AWS measurements interpolated over the domain. The small-scale patterns are generated by redistributing the large-scale precipitation according to the relative snow depth in the ADS dataset. The determination of the precipitation phase is done using an air temperature threshold. Using this simple approach to redistribute precipitation, the accuracy of the spatial snow distribution could be improved significantly. The standard deviation of the absolute snow depth error could be reduced by a factor of 2, to less than 20 cm, for the season 2011/12.
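The redistribution step, scaling the interpolated large-scale solid precipitation by the relative snow depth in the high-resolution map, can be sketched as below. This is one plausible reading of that step with made-up numbers; the actual Alpine3D preprocessing may differ in detail.

```python
import numpy as np

# Interpolated (large-scale) solid precipitation for a few grid cells [mm]
precip_large = np.array([10.0, 10.0, 10.0, 10.0])
# High-resolution snow depth for the same cells, e.g. from the ADS surveys [m]
snow_depth = np.array([0.5, 1.5, 2.0, 0.0])

# Scale each cell by its snow depth relative to the domain mean,
# so the domain total precipitation is conserved.
weights = snow_depth / snow_depth.mean()
precip_small = precip_large * weights

print(precip_small)        # [ 5. 15. 20.  0.]
print(precip_small.sum())  # 40.0, equal to precip_large.sum()
```

Wind-sheltered cells that accumulate deep snow receive more of the precipitation, wind-exposed bare cells receive none, while the catchment water balance is unchanged.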
An automatic and effective parameter optimization method for model tuning
Directory of Open Access Journals (Sweden)
T. Zhang
2015-11-01
... simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can easily be applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
Lumpy - an interactive Lumped Parameter Modeling code based on MS Access and MS Excel.
Suckow, A.
2012-04-01
Several tracers for dating groundwater (18O/2H, 3H, CFCs, SF6, 85Kr) need lumped parameter modeling (LPM) to convert measured values into numbers with unit time. Other tracers (T/3He, 39Ar, 14C, 81Kr) allow the computation of apparent ages with a mathematical formula using radioactive decay, without defining the age mixture that any groundwater sample represents. Interpretation of the latter also profits significantly from LPM tools that allow forward modeling of input time series to measurable output values, assuming different age distributions and mixtures in the sample. This talk presents a lumped parameter modeling code, Lumpy, combining up to two LPMs in parallel. The code is standalone and freeware. It is based on MS Access and Access Basic (AB) and allows using any number of measurements for both input time series and output measurements, with any, not necessarily constant, time resolution. Several tracers, also spanning very different timescales, such as the combination of 18O, CFCs and 14C, can be modeled, displayed and fitted simultaneously. Lumpy allows, for each of the two parallel models, the choice of the following age distributions: Exponential Piston flow Model (EPM), Linear Piston flow Model (LPM), Dispersion Model (DM), Piston flow Model (PM) and Gamma Model (GM). Concerning input functions, Lumpy allows delaying the input (passage through the unsaturated zone), shifting it by a constant value (converting 18O data from a GNIP station to a different altitude), multiplying it by a constant value (geochemical reduction of initial 14C), and defining a constant input value prior to the input time series (pre-bomb tritium). Lumpy also allows for underground tracer production (4He or 39Ar) and the computation of a daughter product (tritiugenic 3He) as well as partial loss of the daughter product (partial re-equilibration of 3He). These additional parameters and the input functions can be defined independently for the two sub-LPMs to represent two different recharge ...
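The core operation behind any such LPM code is a convolution of the tracer input time series with an age distribution, including radioactive decay. A minimal sketch of the exponential model case (mean age, half-life and input values are illustrative, and this is not Lumpy's implementation):

```python
import numpy as np

def exponential_model_output(c_in, mean_age, lam, dt=1.0):
    """Convolve an input concentration series c_in with an exponential
    transit-time distribution g(tau) = exp(-tau/T)/T, applying radioactive
    decay with constant lam; returns the modeled output at the last step."""
    n = len(c_in)
    tau = np.arange(n) * dt
    g = np.exp(-tau / mean_age) / mean_age * np.exp(-lam * tau)
    out = np.convolve(c_in, g)[:n] * dt   # discrete convolution integral
    return out[-1]

# Tritium-like example: constant 10 TU input, half-life 12.32 a, mean age 20 a
lam = np.log(2) / 12.32
c = exponential_model_output(np.full(400, 10.0), mean_age=20.0, lam=lam)
print(c)   # below the 10 TU input: decay plus age mixing
```

For a constant input this approaches the analytic steady state c_in / (1 + lam * T), which gives a quick sanity check on the discretization.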
Ternary interaction parameters in calphad solution models
Energy Technology Data Exchange (ETDEWEB)
Eleno, Luiz T.F., E-mail: luizeleno@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica; Schön, Claudio G., E-mail: schoen@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Computational Materials Science Laboratory. Department of Metallurgical and Materials Engineering
2014-07-01
For random, diluted, multicomponent solutions, the excess chemical potentials can be expanded in power series of the composition, with coefficients that are pressure- and temperature-dependent. For a binary system, this approach is equivalent to using polynomial truncated expansions, such as the Redlich-Kister series for describing integral thermodynamic quantities. For ternary systems, an equivalent expansion of the excess chemical potentials clearly justifies the inclusion of ternary interaction parameters, which arise naturally in the form of correction terms in higher-order power expansions. To demonstrate this, we carry out truncated polynomial expansions of the excess chemical potential up to the sixth power of the composition variables. (author)
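For the binary case mentioned in the abstract, the truncated polynomial expansion is the familiar Redlich-Kister series. A small sketch of evaluating the excess Gibbs energy of a binary A-B solution (the interaction coefficients here are arbitrary illustrative values):

```python
def gibbs_excess_binary(x_b, L):
    """Redlich-Kister excess Gibbs energy of a binary A-B solution:
    G_ex = x_A * x_B * sum_k L[k] * (x_A - x_B)**k,  L[k] in J/mol."""
    x_a = 1.0 - x_b
    return x_a * x_b * sum(lk * (x_a - x_b) ** k for k, lk in enumerate(L))

# Keeping only L0 gives the symmetric regular-solution limit.
print(gibbs_excess_binary(0.5, [10000.0]))              # 2500.0
# Higher-order coefficients skew the curve away from x_b = 0.5.
print(gibbs_excess_binary(0.25, [10000.0, -2000.0]))    # 1687.5
```

Ternary interaction parameters enter the same way as extra correction terms of the form x_A * x_B * x_C * L_ABC in the three-component expansion, which is the point the paper justifies from the power series of the excess chemical potentials.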
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
In this paper, an extended Kendall model for priority-scheduled input-line group output with multiple channels in an Asynchronous Transfer Mode (ATM) exchange system is proposed, and the mean method is then used to mathematically model the non-typical, non-anticipative PRiority service (PR) model. Compared with the typical non-anticipative PR model, it expresses the characteristics of priority-scheduled input-line group output with multiple channels in an ATM exchange system. The simulation experiment shows that this model can dramatically alleviate head-of-line (HOL) blocking and improve the performance of an input-queued ATM switch network. This model has good development prospects in ATM exchange systems.
Institute of Scientific and Technical Information of China (English)
Zheng Zhong; Song Shenmin
2014-01-01
To synchronize the attitude of a spacecraft formation flying system, three novel autonomous control schemes are proposed to deal with the issue in this paper. The first one is an ideal autonomous attitude coordinated controller, which is applied to address the case with certain models and no disturbance. The second one is a robust adaptive attitude coordinated controller, which aims to tackle the case with external disturbances and model uncertainties. The last one is a filtered robust adaptive attitude coordinated controller, which is used to overcome the case with input constraint, model uncertainties, and external disturbances. The above three controllers do not need any external tracking signal and only require angular velocity and relative orientation between a spacecraft and its neighbors. Besides, the relative information is represented in the body frame of each spacecraft. The controllers are proved to be able to result in asymptotical stability almost everywhere. Numerical simulation results show that the proposed three approaches are effective for attitude coordination in a spacecraft formation flying system.
Nuclear inputs of key iron isotopes for core-collapse modeling and simulation
Nabi, Jameel-Un
2014-01-01
From the modeling and simulation results of presupernova evolution of massive stars, it was found that isotopes of iron, $^{54,55,56}$Fe, play a significant role inside the stellar cores, primarily decreasing the electron-to-baryon ratio ($Y_{e}$) mainly via electron capture processes thereby reducing the pressure support. The neutrinos produced, as a result of these capture processes, are transparent to the stellar matter and assist in cooling the core thereby reducing the entropy. The structure of the presupernova star is altered both by the changes in $Y_{e}$ and the entropy of the core material. Here we present the microscopic calculation of Gamow-Teller strength distributions for isotopes of iron. The calculation is also compared with other theoretical models and experimental data. Presented also are stellar electron capture rates and associated neutrino cooling rates, due to isotopes of iron, in a form suitable for simulation and modeling codes. It is hoped that the nuclear inputs presented here should ...
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear models for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.
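The least-squares fitting of a nonlinear model to observed data, the basic step the abstract describes, can be sketched as follows. The exponential-decay model and all numbers are invented for illustration; this is not the program described in the report.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def model(x, a, b):
    """Toy nonlinear environmental model: exponential decay."""
    return a * np.exp(-b * x)

# Synthetic "observed" data: true parameters a=3.0, b=0.7 plus noise.
x = np.linspace(0.0, 5.0, 40)
y = model(x, 3.0, 0.7) + rng.normal(0.0, 0.05, x.size)

# Nonlinear least-squares fit; pcov carries the parameter error analysis.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))   # one-sigma standard errors of the estimates
print(popt, perr)
```

Changing the assumed error structure (e.g. passing per-point `sigma` weights to `curve_fit`) changes both the estimates and the reported uncertainties, which is the interactive choice the report's program exposes.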
Rock thermal conductivity as key parameter for geothermal numerical models
Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele
2013-04-01
Geothermal energy applications are undergoing rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground, along with the implementation of efficient thermal energy transfer technologies. This paper focuses on the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling, since it directly controls the steady-state temperature field. An evaluation of this thermal property is required in several fields, such as thermo-hydro-mechanical multiphysics analysis of frozen soils, designing ground-source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of the subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low- and high-enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data for sedimentary, igneous and metamorphic rocks, a series of laboratory measurements was performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal property tests were carried out in both dry and wet conditions, using a C-Therm TCi device operating according to the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water-saturated and dehydrated in a fan-forced drying oven at 70 °C for 24 h, to preserve the mineral assemblage and prevent changes in effective porosity. Subsequently, the samples were stored in an air-conditioned room while bulk density, solid volume and porosity were determined. The measured thermal conductivity ...
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. It describes a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation on two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimal training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
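The bound-narrowing idea described above can be sketched in a few lines. This is a simplified illustration, not the published LoBaRE algorithm: the sampling scheme, keep fraction, and toy objective are all invented here.

```python
import random

def iterative_narrowing(loss, bounds, n_samples=200, n_iter=8, keep=0.2, seed=0):
    """Sample candidate parameter sets within the current bounds, keep the
    best-scoring fraction, and shrink the bounds around them (a hypothetical
    simplification of iterative Bayesian bound updating)."""
    rng = random.Random(seed)
    for _ in range(n_iter):
        cand = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)]
        cand.sort(key=loss)
        elite = cand[: max(1, int(keep * n_samples))]
        # Update the "parent" bounds to the bounding box of the elite samples
        bounds = [(min(p[i] for p in elite), max(p[i] for p in elite))
                  for i in range(len(bounds))]
    return cand[0], bounds

# Toy objective with optimum at (1.0, -2.0); a real application would use
# a model-vs-observation error such as streamflow RMSE.
loss = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, final_bounds = iterative_narrowing(loss, [(-10, 10), (-10, 10)])
```

Each iteration concentrates the search on a smaller region, which is the source of the fast convergence claimed in the abstract.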
Parameter estimation for whole-body kinetic model of FDG metabolism
Institute of Scientific and Technical Information of China (English)
CUI Yunfeng; BAI Jing; CHEN Yingmao; TIAN Jiahe
2006-01-01
Based on the radioactive tracer [18F]2-fluoro-2-deoxy-D-glucose (FDG), positron emission tomography (PET), and compartment modeling, tracer kinetic studies have become an important method for investigating glucose metabolic kinetics in the human body. In this work, the kinetic parameters of a three-compartment, four-parameter model for FDG metabolism in the tissues of myocardium, lung, liver, stomach, spleen, pancreas, and marrow were estimated through dynamic FDG-PET experiments. Together with published brain and skeletal muscle parameters, a relatively complete whole-body model is presented. In the liver model, the dual blood supply from the hepatic artery and the portal vein was considered for parameter estimation, and more accurate results were obtained using the dual input rather than a single arterial input. The established whole-body model provides functional information on FDG metabolism in the human body. It can be used to further investigate glucose metabolism, and also for the simulation and visualization of the FDG metabolic process in the human body.
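The three-compartment, four-parameter model referred to above has the standard two-tissue form dC1/dt = k1*Cp - (k2 + k3)*C1 + k4*C2 and dC2/dt = k3*C1 - k4*C2, where Cp is the plasma input function. A minimal forward-Euler integration might look like the sketch below; the rate constants and input function are made up for illustration, not fitted values from the paper.

```python
import math

def fdg_two_tissue(cp, dt, k1, k2, k3, k4):
    """Forward-Euler integration of the standard two-tissue FDG model:
        dC1/dt = k1*Cp - (k2 + k3)*C1 + k4*C2
        dC2/dt = k3*C1 - k4*C2
    cp: sampled plasma input function; dt: time step.
    Returns the total tissue activity C1 + C2 at each sample."""
    c1 = c2 = 0.0
    out = []
    for cpt in cp:
        dc1 = k1 * cpt - (k2 + k3) * c1 + k4 * c2
        dc2 = k3 * c1 - k4 * c2
        c1 += dt * dc1
        c2 += dt * dc2
        out.append(c1 + c2)
    return out

# Decaying-exponential plasma input (arbitrary units, hypothetical)
cp = [math.exp(-0.05 * i) for i in range(200)]
tac = fdg_two_tissue(cp, dt=0.5, k1=0.1, k2=0.15, k3=0.05, k4=0.01)
```

Parameter estimation then amounts to adjusting k1..k4 so that the simulated time-activity curve matches the measured PET data.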
GIS-Based Hydrogeological-Parameter Modeling
Institute of Scientific and Technical Information of China (English)
None
2000-01-01
A regression model is proposed to relate the variation of water well depth to topographic properties (area and slope), the variation of hydraulic conductivity, and the vertical decay factor. The implementation of this model in a GIS environment (ARC/INFO), based on known water well data and a DEM, is used to estimate the variation of hydraulic conductivity and decay factor of different lithology units in a watershed context.
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
Mirror symmetry for two parameter models, 2
Candelas, Philip; Katz, S; Morrison, Douglas Robert Ogston; Philip Candelas; Anamaria Font; Sheldon Katz; David R Morrison
1994-01-01
We describe in detail the space of the two K\\"ahler parameters of the Calabi--Yau manifold \\P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi--Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,\\Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,\\Z) symmetry that acts on a boundary of the moduli space.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
Predicting musically induced emotions from physiological inputs: Linear and neural network models
Directory of Open Access Journals (Sweden)
Frank A. Russo
2013-08-01
Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear combination of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
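The multiple-linear-regression step described above can be illustrated with ordinary least squares via the normal equations. The feature columns (heart rate, skin conductance) and arousal ratings below are invented, and a real analysis would use a statistics library rather than hand-rolled elimination.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * p
    for i in range(p - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][c] * w[c] for c in range(i + 1, p))) / A[i][i]
    return w

# Columns: intercept, heart rate (bpm), skin conductance (hypothetical units)
X = [[1, 60, 2.0], [1, 70, 2.5], [1, 80, 3.5], [1, 90, 4.0], [1, 75, 3.0]]
y = [2.0, 3.0, 5.0, 6.0, 4.0]  # made-up arousal ratings
w = fit_linear(X, y)
```

The fitted weights quantify each channel's linear contribution to the rating, which is the comparison baseline for the neural network in the study.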
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters.
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information.
Directory of Open Access Journals (Sweden)
Hongshan Zhao
2012-05-01
Full Text Available Short-term solar irradiance forecasting (STSIF is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. Artificial Neural Network (ANN is suitable for STSIF modeling and many studies on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV, and the Levenberg-Marquardt algorithm (LMA is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS, and the results indicate that the forecast accuracy is obviously improved under variable weather conditions.
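Extracting statistical feature parameters from an irradiance series might look like the sketch below. The specific features here (mean, standard deviation, maximum, mean absolute ramp) are plausible examples, not the exact published ANN-SFP input set.

```python
def irradiance_features(series):
    """Compute simple statistical feature parameters of an irradiance
    series, to serve as a compact ANN input vector (illustrative choice
    of features, not the paper's exact set)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    ramps = [abs(series[i + 1] - series[i]) for i in range(n - 1)]
    return {
        "mean": mean,
        "std": var ** 0.5,
        "max": max(series),
        "mean_abs_ramp": sum(ramps) / len(ramps),  # proxy for weather variability
    }

# Synthetic clear-sky-like day, W/m^2 (made-up values)
day = [0, 100, 300, 500, 650, 700, 650, 500, 300, 100, 0]
f = irradiance_features(day)
```

A handful of such features replaces a long raw history as network input, which is the source of the reduced model complexity claimed above.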
Hodge, B.; Orwig, K.; McCaa, J. R.; Harrold, S.; Draxl, C.; Jones, W.; Searight, K.; Getman, D.
2013-12-01
Regional wind integration studies in the United States, such as the Western Wind and Solar Integration Study (WWSIS), Eastern Wind Integration and Transmission Study (EWITS), and Eastern Renewable Generation Integration Study (ERGIS), perform detailed simulations of the power system to determine the impact of high wind and solar energy penetrations on power systems operations. Some of the specific aspects examined include: infrastructure requirements, impacts on grid operations and conventional generators, ancillary service requirements, as well as the benefits of geographic diversity and forecasting. These studies require geographically broad and temporally consistent wind and solar power production input datasets that realistically reflect the ramping characteristics, spatial and temporal correlations, and capacity factors of wind and solar power plant production, and are time-synchronous with load profiles. The original western and eastern wind datasets were generated independently for 2004-2006 using numerical weather prediction (NWP) models run on a ~2 km grid with 10-minute resolution. Each utilized its own site selection process to augment existing wind plants with simulated sites of high development potential. The original dataset also included day-ahead simulated forecasts. These datasets were the first of their kind and many lessons were learned from their development. For example, the modeling approach used generated periodic false ramps that later had to be removed due to unrealistic impacts on ancillary service requirements. For several years, stakeholders have been requesting an updated dataset that: 1) covers more recent years; 2) spans four or more years to better evaluate interannual variability; 3) uses improved methods to minimize false ramps and spatial seams; 4) better incorporates solar power production inputs; and 5) is more easily accessible. To address these needs, the U.S. Department of Energy (DOE) Wind and Solar Programs have funded two
Rafieeinasab, Arezoo; Norouzi, Amir; Kim, Sunghee; Habibi, Hamideh; Nazari, Behzad; Seo, Dong-Jun; Lee, Haksu; Cosgrove, Brian; Cui, Zhengtao
2015-12-01
Urban flash flooding is a serious problem in large, highly populated areas such as the Dallas-Fort Worth Metroplex (DFW). Being able to monitor and predict flash flooding at a high spatiotemporal resolution is critical to providing location-specific early warnings and cost-effective emergency management in such areas. Under the idealized conditions of perfect models and precipitation input, one may expect that spatiotemporal specificity and accuracy of the model output improve as the resolution of the models and precipitation input increases. In reality, however, due to the errors in the precipitation input, and in the structures, parameters and states of the models, there are practical limits to the model resolution. In this work, we assess the sensitivity of streamflow simulation in urban catchments to the spatiotemporal resolution of precipitation input and hydrologic modeling to identify the resolution at which the simulation errors may be at minimum given the quality of the precipitation input and hydrologic models used, and the response time of the catchment. The hydrologic modeling system used in this work is the National Weather Service (NWS) Hydrology Laboratory's Research Distributed Hydrologic Model (HLRDHM) applied at spatiotemporal resolutions ranging from 250 m to 2 km and from 1 min to 1 h applied over the Cities of Fort Worth, Arlington and Grand Prairie in DFW. The high-resolution precipitation input is from the DFW Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere (CASA) radars. For comparison, the NWS Multisensor Precipitation Estimator (MPE) product, which is available at a 4-km 1-h resolution, was also used. The streamflow simulation results are evaluated for 5 urban catchments ranging in size from 3.4 to 54.6 km2 and from about 45 min to 3 h in time-to-peak in the Cities of Fort Worth, Arlington and Grand Prairie. The streamflow observations used in evaluation were obtained from water level measurements via rating
CHAMP: Changepoint Detection Using Approximate Model Parameters
2014-06-01
positions as a Markov chain in which the transition probabilities are defined by the time since the last changepoint: p(τ_{i+1} = t | τ_i = s) = g(t − s). ... experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. Related work: Hidden Markov Models (HMMs) are ... length α, and maximum number of particles M. Output: Viterbi path of changepoint times and models
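The transition probability g(t − s) in the excerpt above is a distribution over segment lengths. A geometric choice, used here purely for illustration (the paper's g may differ), gives:

```python
def geometric_g(p):
    """Segment-length distribution g for the changepoint transition
    p(tau_{i+1} = t | tau_i = s) = g(t - s), modeled as geometric:
    each time step ends the segment with probability p (an assumption
    made for this sketch)."""
    return lambda length: p * (1 - p) ** (length - 1) if length >= 1 else 0.0

g = geometric_g(0.1)          # expected segment length 1/p = 10 steps
probs = [g(L) for L in range(1, 6)]
```

Because g sums to one over positive lengths, it defines a valid prior on the gaps between consecutive changepoints.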
Linear regression models of floor surface parameters on friction between Neolite and quarry tiles.
Chang, Wen-Ruey; Matz, Simon; Grönqvist, Raoul; Hirvonen, Mikko
2010-01-01
For slips and falls, friction is widely used as an indicator of surface slipperiness. Surface parameters, including surface roughness and waviness, were shown to influence friction by correlating individual surface parameters with the measured friction. A collective input from multiple surface parameters as a predictor of friction, however, could provide a broader perspective on the contributions from all the surface parameters evaluated. The objective of this study was to develop regression models between the surface parameters and measured friction. The dynamic friction was measured using three different mixtures of glycerol and water as contaminants. Various surface roughness and waviness parameters were measured using three different cut-off lengths. The regression models indicate that the selected surface parameters can predict the measured friction coefficient reliably in most of the glycerol concentrations and cut-off lengths evaluated. The results of the regression models were, in general, consistent with those obtained from the correlation between individual surface parameters and the measured friction in eight out of nine conditions evaluated in this experiment. A hierarchical regression model was further developed to evaluate the cumulative contributions of the surface parameters in the final iteration by adding these parameters to the regression model one at a time from the easiest to measure to the most difficult to measure and evaluating their impacts on the adjusted R² values. For practical purposes, the surface parameter Ra alone would account for the majority of the measured friction even if it did not reach a statistically significant level in some of the regression models.
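The adjusted R² statistic used above to judge each added surface parameter follows the standard formula R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1). A minimal sketch, with made-up friction data:

```python
def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R-squared: R2 penalized for the number of predictors p,
    so adding a parameter only helps if it explains enough extra variance."""
    n = len(y)
    mean_y = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

# Measured friction coefficients vs. a one-parameter model's predictions
# (hypothetical numbers, not from the study)
y = [0.30, 0.35, 0.42, 0.50, 0.55]
y_hat = [0.31, 0.36, 0.41, 0.49, 0.56]
r2a = adjusted_r2(y, y_hat, n_predictors=1)
```

In the hierarchical procedure, one would recompute r2a after each added surface parameter and keep only additions that raise it.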
Graf, D. L.; Anderson, D. E.
1981-12-01
Hydrological models that treat phenomena occurring deep in sedimentary piles, such as petroleum maturation and retention of chemical and radioactive waste, may require time spans of at least several million years. Many input quantities classically treated as constants will be variables on this time scale. Models sophisticated enough to include transport contributions from such processes as chemical diffusion, mineral dehydration and shale membrane behavior require considerable knowledge about regional geological history as well as the pertinent mineralogical and geochemical relationships. Simple dehydrations such as those of gypsum and halloysite occur at sharply-defined temperatures but, as with all mineral dehydration reactions, the equilibrium temperature is strongly dependent on the pore-fluid salinity and degree of overpressuring encountered in the subsurface. The dehydrations of analcime and smectite proceed by reactions involving other sedimentary minerals. The smectite reaction is crystallographically complex, yielding a succession of mixed-layered illite/smectites, and on the U.S.A. Gulf of Mexico coast continues over several million years at a particular stratigraphic interval.
Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata
Directory of Open Access Journals (Sweden)
Marta Capiluppi
2012-10-01
Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signal diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent on the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agent systems.
Limited fetch revisited: Comparison of wind input terms, in surface wave modeling
Pushkarev, Andrei; Zakharov, Vladimir
2016-07-01
Results pertaining to numerical solutions of the Hasselmann kinetic equation (HE), for wind-driven sea spectra, in the fetch-limited geometry, are presented. Five versions of source functions, including the recently introduced ZRP model (Zakharov et al., 2012), have been studied, for the exact expression of Snl and high-frequency implicit dissipation due to wave breaking. Four of the five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance, and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which agreed strongly with dozens of field observations performed in seas and lakes since 1947. The fifth experiment, using the WAM3 wind input term, employed additional spectral peak dissipation and reproduced the results of a previous, similar numerical simulation described in Komen et al. (1994), but only supported the field experiments for moderate fetches, demonstrating total energy saturation at half the Pierson-Moskowitz limit. An alternative framework for HE numerical simulation is proposed, along with a set of tests allowing one to select physically justified source terms.
El Haimar, Amine; Santos, Joost R
2014-03-01
An influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of an influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics.
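The inoperability metric described above comes from the input-output inoperability relation q = A* q + c*, where c* is the direct perturbation to each sector and A* couples sectors. The static version can be sketched as a fixed-point iteration; the two-sector matrix and perturbations below are invented, and the paper's model is the dynamic variant of this relation.

```python
def inoperability(a_star, c_star, n_iter=200):
    """Fixed-point iteration for the static inoperability input-output
    relation q = A* q + c*. Each q[i] is the fractional gap between
    as-planned and actual output of sector i, capped at 1 (full outage)."""
    n = len(c_star)
    q = [0.0] * n
    for _ in range(n_iter):
        q = [min(1.0, c_star[i] + sum(a_star[i][j] * q[j] for j in range(n)))
             for i in range(n)]
    return q

# Two hypothetical sectors: health care and government services
a_star = [[0.0, 0.2],
          [0.3, 0.0]]
c_star = [0.10, 0.05]  # direct workforce perturbations (made up)
q = inoperability(a_star, c_star)
```

The resulting q exceeds the direct perturbation c* because interdependencies propagate each sector's shortfall to the others.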
A MULTIYEAR LAGS INPUT-HOLDING-OUTPUT MODEL ON EDUCATION WITH EXCLUDING IDLE CAPITAL
Institute of Scientific and Technical Information of China (English)
Xue FU; Xikang CHEN
2009-01-01
This paper develops a multiyear-lag Input-Holding-Output (I-H-O) model on education, excluding idle capital, to address a reasonable education structure in support of a sustainable development strategy in China. First, the model considers the multiyear lag of human capital, because the lag time of human capital is even longer and more important than that of fixed capital. Second, it considers the idle capital resulting from output decline in education, for example, student decreases in primary schools. A new generalized Leontief dynamic inverse is deduced to obtain a positive solution on education when output declines as well as expands. After compiling the 2000 I-H-O table on education, the authors adopt a modifications-by-step method to treat nonlinear coefficients, and calculate the education scale, the requirement of human capital, and education expenditure from 2005 to 2020. It is found that the structural imbalance of human capital is a serious problem for Chinese economic development.
WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...
African Journals Online (AJOL)
Preferred Customer
[3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and .... this case, the coefficient B takes the dimension of a ... In plane-strain problems, the assumption of ... loaded circular region; s is the radial coordinate.
"Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...
Energy Technology Data Exchange (ETDEWEB)
Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Stagich, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-05-25
Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
Energy Technology Data Exchange (ETDEWEB)
Jannik, G. Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hartman, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Stagich, Brooke [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-09-26
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan
2017-04-01
Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representation are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and ice-melt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the setup to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data, with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin. It is therefore planned to include other basins with differing
Dynamic Modeling of a Roller Chain Drive System Considering the Flexibility of Input Shaft
Institute of Scientific and Technical Information of China (English)
XU Lixin; YANG Yuhu; CHANG Zongyu; LIU Jianping
2010-01-01
Roller chain drives are widely used in various high-speed, high-load and power transmission applications, but their complex dynamic behavior is not well researched. Most studies have focused only on the vibration of the chain's tight span, and in these models many factors are neglected. In this paper, a mathematical model is developed to calculate the dynamic response of a roller chain drive operating at constant or variable speed. The model uses the complete chain transmission, with two sprockets and the necessary tight and slack spans. The effect of the flexibility of the input shaft on the dynamic response of the chain system is taken into account, as well as the elastic deformation in the chain, the inertial forces, gravity, and the torque on the driven shaft. The nonlinear equations of motion are derived using Lagrange equations and solved numerically. Given the center distance and the two initial position angles of teeth on the driving and driven sprockets corresponding to the first seated roller on each side of the tight span, the dynamics of any roller chain drive with two sprockets and two spans can be analyzed by the procedure. Finally, a numerical example is given and the validity of the procedure is demonstrated by analyzing the dynamic behavior of a typical roller chain drive. The model can simulate the transverse and longitudinal vibration of the chain spans and the torsional vibration of the sprockets well. This study provides an effective method for the analysis of the dynamic characteristics of chain drive systems.
Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models
Bates, Bryson C.
1992-01-01
A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional, instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. The accuracies of the parameter estimates can then inform the choice of resolution, sensor array size, and the number and location of sensor readings.
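The estimation step can be sketched on a simplified (shear-free, isotropic) instantaneous-release model: taking logarithms makes the concentration field linear in r², so source mass M and diffusivity K follow from a linear least-squares fit to one snapshot. The Gaussian measurement noise of the abstract is omitted here so that the recovery is exact; all numbers are illustrative.

```python
import math

def concentration(x, y, t, M=100.0, K=0.5):
    # instantaneous point release, isotropic 2-D diffusion (shear omitted)
    return M / (4*math.pi*K*t) * math.exp(-(x*x + y*y) / (4*K*t))

def estimate(samples, t):
    # ln C = ln(M / (4 pi K t)) - r^2 / (4 K t): regress ln C on r^2
    xs = [x*x + y*y for x, y, c in samples]
    ys = [math.log(c) for x, y, c in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    B = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx)**2 for x in xs)           # B = 1 / (4 K t)
    A = my + B * mx                               # A = ln(M / (4 pi K t))
    K = 1.0 / (4.0 * B * t)
    M = math.exp(A) * 4.0 * math.pi * K * t
    return M, K

samples = [(x, y, concentration(x, y, 2.0))       # one noise-free snapshot
           for x in (0.0, 1.0, 2.0) for y in (0.0, 1.0, 2.0)]
```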
On retrial queueing model with fuzzy parameters
Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng
2007-01-01
This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial, and service rates. The α-cut approach is used to transform the fuzzy retrial queue into a family of conventional crisp retrial queues. By means of the membership functions of the system characteristics, a set of parametric nonlinear programs is developed to describe this family of crisp retrial queues. A numerical example is solved to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending the model to the fuzzy environment, the retrial queue is represented more accurately and the analytic results are more useful for system designers and practitioners.
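The α-cut idea can be sketched on a much simpler crisp model than a retrial queue. Below, the M/M/1 measure ρ/(1-ρ) stands in for the system characteristics, with triangular fuzzy arrival and service rates; because the measure is increasing in ρ, each α-cut maps to an interval by plain interval arithmetic. This is an illustration of the approach, not the paper's retrial-queue formulas.

```python
def alpha_cut(tri, alpha):
    """alpha-cut interval of a triangular fuzzy number (left, mode, right)."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

def fuzzy_measure(lam_tri, mu_tri, alphas):
    """Interval of rho/(1-rho) at each alpha level (stand-in for the
    retrial-queue characteristics); assumes the upper rho stays below 1."""
    out = {}
    for a in alphas:
        (lL, lU) = alpha_cut(lam_tri, a)
        (mL, mU) = alpha_cut(mu_tri, a)
        lo, hi = lL / mU, lU / mL          # rho bounds; measure increasing in rho
        out[a] = (lo / (1 - lo), hi / (1 - hi))
    return out
```

At α = 1 the interval collapses to the crisp value; lower α levels widen it, tracing out the membership function of the performance measure.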
Solar parameters for modeling interplanetary background
Bzowski, M; Tokumaru, M; Fujiki, K; Quemerais, E; Lallement, R; Ferron, S; Bochsler, P; McComas, D J
2011-01-01
The goal of the Fully Online Datacenter of Ultraviolet Emissions (FONDUE) Working Team of the International Space Science Institute in Bern, Switzerland, was to establish a common calibration of various UV and EUV heliospheric observations, both spectroscopic and photometric. Realization of this goal required an up-to-date model of spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the solar factors shaping the distribution of neutral interstellar H in the heliosphere. Presented are the solar Lyman-alpha flux and the solar Lyman-alpha resonant radiation pressure force acting on neutral H atoms in the heliosphere, solar EUV radiation and the photoionization of heliospheric hydrogen, and their evolution in time and the still hypothetical variation with heliolatitude. Further, solar wind and its evolution with solar activity is presented in the context of the charge excha...
Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models
Hori, Kentaro
2013-01-01
We systematically construct a class of two-dimensional $(2,2)$ supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one K\\"ahler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, $h^{2,1}=23$ versus $h^{2,1}=59$, have exactly the same quantum K\\"ahler moduli space. The strong-weak duality plays a crucial r\\^ole in confirming this, and also is useful in the actual computation of the metric on t...
Energy Technology Data Exchange (ETDEWEB)
Morrison, J.L.
1992-12-01
The objective of this research is to develop a simple yet accurate lumped-parameter mathematical model for an explosively driven magnetohydrodynamic generator that can predict the pulse-power variables of voltage and current from startup through regenerative operation. The inputs to the model are the plasma properties entering the generator as predicted by the explosive shock model of Reference [1]. The strategy is to reduce three-dimensional electromagnetic and thermodynamic effects to a zero-dimensional model. The model provides a convenient tool for researchers to optimize designs for pulse-power applications, and it is validated using the experimental data of Reference [1]. An overview of the operation of the explosively driven generator is first presented. A simplified electrical circuit model that describes the basic performance of the device is then developed, followed by a lumped-parameter model that incorporates the coupled electromagnetic and thermodynamic effects governing generator performance. The model is based on fundamental physical principles and on parameters that were either obtained directly from design data or estimated from experimental data. It was used to obtain parameter sensitivities and to predict beyond the limits observed in the experiments, to the levels desired by the potential Department of Defense sponsors. The model identifies process limitations that provide direction for future research.
A pre-calibration approach to select optimum inputs for hydrological models in data-scarce regions
Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil
2016-10-01
This study uses the Soil and Water Assessment Tool (SWAT) model to quantitatively compare available input datasets in a data-poor dryland environment (Wala catchment, Jordan; 1743 km²). Eighteen scenarios combining the best available land-use, soil, and weather datasets (1979-2002) are considered to construct SWAT models. Data include local observations and global reanalysis products. Uncalibrated model outputs assess the variability in model performance derived from input data sources only. Model performance against discharge and sediment load data is compared using r², Nash-Sutcliffe efficiency (NSE), the root mean square error to standard deviation ratio (RSR), and percent bias (PBIAS). The NSE statistic varies from 0.56 to -12 and from 0.79 to -85 for the best- and poorest-performing scenarios against observed discharge and sediment data, respectively. Global weather inputs yield considerable improvements over discontinuous local datasets, whilst local soil inputs perform considerably better than global-scale mapping. The methodology provides a rapid, transparent and transferable approach to aid selection of the most robust suite of input data.
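The three statistics used to score the uncalibrated runs can be computed directly from paired series. The PBIAS sign convention below (positive when the model underestimates) is one common choice but varies between authors.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is 'no better than the mean'."""
    mean_o = sum(obs) / len(obs)
    return 1.0 - sum((o - s)**2 for o, s in zip(obs, sim)) \
               / sum((o - mean_o)**2 for o in obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mean_o = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s)**2 for o, s in zip(obs, sim)) / len(obs))
    std_o = math.sqrt(sum((o - mean_o)**2 for o in obs) / len(obs))
    return rmse / std_o

def pbias(obs, sim):
    """Percent bias; positive = model underestimates (convention varies)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

A perfect simulation gives NSE = 1, RSR = 0, PBIAS = 0, and simulating the observed mean everywhere gives NSE = 0, matching the thresholds commonly quoted for these metrics.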
Exploring the interdependencies between parameters in a material model.
Energy Technology Data Exchange (ETDEWEB)
Silling, Stewart Andrew; Fermen-Coker, Muge
2014-01-01
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
An Alternative Three-Parameter Logistic Item Response Model.
Pashley, Peter J.
Birnbaum's three-parameter logistic function has become a common basis for item response theory modeling, especially within situations where significant guessing behavior is evident. This model is formed through a linear transformation of the two-parameter logistic function in order to facilitate a lower asymptote. This paper discusses an…
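The three-parameter logistic function described above is the two-parameter logistic curve rescaled so that its lower asymptote is the pseudo-guessing parameter c:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    discrimination a, difficulty b, lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At ability theta = b the probability is (1 + c) / 2, and as theta decreases the curve flattens out at c rather than at zero, which is the model's account of guessing on multiple-choice items.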
A compact cyclic plasticity model with parameter evolution
DEFF Research Database (Denmark)
Krenk, Steen; Tidemann, L.
2017-01-01
…, and it is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of a changing yield level, is included in the form of extended evolution equations for the model parameters...
Identification of parameters in nonlinear geotechnical models using the extended Kalman filter
Directory of Open Access Journals (Sweden)
Nestorović Tamara
2014-01-01
Direct measurement of relevant system parameters often represents a problem due to different limitations. In geomechanics, measurement of the geotechnical material constants which constitute a material model is usually a very difficult task even with modern test equipment. Back-analysis has proved to be a more efficient and more economic method for identifying material constants, because it takes as inputs measurement data such as settlements and pore pressures, which are directly measurable. Among the many model parameter identification methods, the Kalman filter has been applied very effectively in recent years. In this paper, the extended Kalman filter – local iteration procedure, incorporated with finite element analysis (FEA) software, has been implemented. To demonstrate the efficiency of the method, parameter identification has been performed for a nonlinear geotechnical model.
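The filtering idea can be sketched in one dimension: the unknown material constant is treated as a random-walk state, and the nonlinear measurement model is linearized at each step. The quadratic model f and all noise settings below are invented for illustration; the paper couples the filter with finite element analysis instead.

```python
import random

def ekf_identify(measure, f, df, theta0=0.5, P=1.0, Q=1e-6, R=0.01, steps=200):
    """Scalar EKF treating the unknown parameter theta as a random-walk state."""
    theta = theta0
    for _ in range(steps):
        P += Q                            # predict: parameter modeled as constant
        H = df(theta)                     # Jacobian of the measurement model
        K = P * H / (H * P * H + R)       # Kalman gain
        theta += K * (measure() - f(theta))
        P *= (1.0 - K * H)                # covariance update
    return theta

random.seed(0)
true_theta = 2.0
f = lambda th: th * th                    # illustrative nonlinear response
df = lambda th: 2.0 * th
measure = lambda: f(true_theta) + random.gauss(0.0, 0.1)
```

Starting from theta0 = 0.5, repeated measurements of the noisy response pull the estimate toward the true value 2.0.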
Utilizing Soize's Approach to Identify Parameter and Model Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Bonney, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Univ. of Wisconsin, Madison, WI (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-10-01
Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained under the assumption that some experimental data are available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. In addition to the nominal approach, an alternative distribution can be used, with corrections, to expand the scope of the method. This method is one of very few that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.
NWP model forecast skill optimization via closure parameter variations
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
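Steps (i) and (ii) above can be caricatured in a few lines: draw parameter values from a proposal, weight each ensemble member by its "skill", and refit the proposal to the weighted sample. This is a schematic of the ensemble idea only, not the actual EPPES update equations, and the one-parameter skill function is a toy.

```python
import random, math

def ensemble_estimate(score, mu=0.0, sigma=1.0, members=50, cycles=50, seed=1):
    """Draw an ensemble of parameter values, weight by skill, refit proposal."""
    rng = random.Random(seed)
    for _ in range(cycles):
        draws = [rng.gauss(mu, sigma) for _ in range(members)]   # step (i)
        w = [score(th) for th in draws]                          # step (ii)
        s = sum(w)
        mu = sum(wi * th for wi, th in zip(w, draws)) / s
        var = sum(wi * (th - mu)**2 for wi, th in zip(w, draws)) / s
        sigma = max(math.sqrt(var), 0.05)    # floor keeps the ensemble exploring
    return mu, sigma

# toy likelihood: forecast skill peaks at the unknown "true" value 0.7
skill = lambda th: math.exp(-0.5 * ((th - 0.7) / 0.1)**2)
```

Over the cycles the proposal mean drifts toward the skill maximum and the proposal spread contracts, mirroring how EPPES concentrates on well-performing closure parameter values without extra model runs beyond the ensemble itself.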
Milzow, C.; Kgotlhang, L.; Kinzelbach, W.; Bauer-Gottwein, P.
2006-12-01
medium-term. The Delta's size and limited accessibility make direct data acquisition on the ground difficult. Remote sensing methods are the most promising source of spatially distributed data for both model input and calibration. Besides ground data, METEOSAT and NOAA data are used for precipitation and evapotranspiration inputs, respectively. The topography is taken from a study by Gumbricht et al. (2004), in which the SRTM shuttle mission data are refined using remotely sensed vegetation indexes. The aquifer thickness was determined with an aeromagnetic survey. For calibration, the simulated flooding patterns are compared to patterns derived from satellite imagery: recent ENVISAT ASAR and older NOAA AVHRR scenes. The final objective is to better understand the hydrological and hydraulic aspects of this complex ecosystem and eventually to predict the consequences of human interventions. The model will provide a tool for decision makers to assess the impact of possible upstream dams and water abstraction scenarios.
Directory of Open Access Journals (Sweden)
S. C. van Pelt
2009-12-01
Studies have demonstrated that precipitation at Northern Hemisphere mid-latitudes has increased in recent decades and that this trend is likely to continue, which will influence the discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of two different bias correction methods on output from a Regional Climate Model (RCM) simulation. A Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the SRES-A1B emission scenario, at 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias, to which two bias correction methods are applied. The first corrects for the wet-day fraction and wet-day average (WD bias correction) and the second corrects for the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed by this correction. The second method performs less well on average bias correction, but reproduces the temporal precipitation pattern better. Subsequently, discharge was calculated by using the RACMO2 output as forcing for the HBV-96 hydrological model. A large difference was found between the simulated discharges of the uncorrected RACMO2 run, the WD bias-corrected run, and the MV bias-corrected run. These results show the importance of an appropriate bias correction.
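A minimal sketch of a wet-day style correction: pick a threshold so that the modelled wet-day fraction matches observations, zero the days below it, and rescale the remaining days to the observed wet-day mean. The real WD method (and the MV variant) involves more careful distribution matching; this only shows the two targets the correction hits.

```python
def wd_bias_correct(model_precip, obs_wet_fraction, obs_wet_mean):
    """Force a modelled daily series to a target wet-day fraction and mean."""
    n_wet = max(1, round(obs_wet_fraction * len(model_precip)))
    # threshold = n_wet-th largest modelled value
    thresh = sorted(model_precip, reverse=True)[n_wet - 1]
    wet = [p for p in model_precip if p >= thresh]
    scale = obs_wet_mean * len(wet) / sum(wet)   # match the wet-day mean
    return [p * scale if p >= thresh else 0.0 for p in model_precip]
```

Note the abstract's caveat applies to exactly this kind of scheme: zeroing sub-threshold days can delete too many successive precipitation days, degrading the temporal pattern even while the averages are matched.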
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the autoregressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error can be explained to a large extent by catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov chain Monte Carlo analysis
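The "simple" likelihood model (iid Gaussian simulation errors, no AR(1) term) can be explored with a basic Metropolis sampler. The one-parameter stand-in model, the data, the error standard deviation, and the implicit flat prior below are all illustrative; the paper's model has many parameters and real streamflow data.

```python
import random, math

def metropolis(loglik, theta0, step=0.2, n=5000, seed=42):
    """Random-walk Metropolis: returns the chain of sampled parameter values."""
    rng = random.Random(seed)
    theta, ll = theta0, loglik(theta0)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)          # propose
        ll_p = loglik(prop)
        if ll_p >= ll or rng.random() < math.exp(ll_p - ll):
            theta, ll = prop, ll_p                   # accept
        chain.append(theta)
    return chain

obs = [2.1, 1.9, 2.2, 2.0, 1.8]          # toy "observed streamflow"
model = lambda th: th                    # trivial stand-in for the model
# iid Gaussian errors with s.d. 0.1 (the "simple" likelihood, no AR(1) part)
loglik = lambda th: -0.5 * sum((o - model(th))**2 for o in obs) / 0.1**2
```

After burn-in the chain samples the posterior, whose mean here sits at the sample mean of the observations.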
Optimization modeling of U.S. renewable electricity deployment using local input variables
Bernstein, Adam
For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends now developing may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime: (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) the challenge of balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter
Directory of Open Access Journals (Sweden)
G.Sankara Narayanan
2014-03-01
Unconventional machining processes find many applications in the aerospace and precision industries. They are preferred over conventional methods because of the advent of composite and high strength-to-weight-ratio materials and complex parts, and because of their high accuracy and precision. In unconventional machine tools, trial and error is usually used to fix the values of the process parameters, which increases production time and material wastage. A mathematical model functionally relating the process parameters and operating parameters of a wire-cut electric discharge machine (WEDM) is developed using an artificial neural network (ANN); the workpiece material is SKD11 tool steel. This is accomplished by training a feed-forward neural network with the Levenberg-Marquardt backpropagation learning algorithm. The data used for training and testing the ANN were obtained by conducting trial runs on a wire-cut electric discharge machine in a small-scale industry in South India. The programs for training and testing the neural network were developed using the MATLAB 7.0.1 package. In this work, parameters such as thickness, time, and wear are taken as the input values, from which the values of the process parameters are derived. The proposed approach thus reduces the time taken by trial runs to set the input process parameters of the WEDM, reducing production time and material wastage, lowering the cost of machining, and thereby increasing overall productivity.
Identification of a Manipulator Model Using the Input Error Method in the Mathematica Program
Directory of Open Access Journals (Sweden)
Leszek CEDRO
2009-06-01
The problem of parameter identification for a four-degree-of-freedom robot was solved using the Mathematica program. The identification was performed by means of specially developed differential filters [1]. Using the example of a manipulator, we analyze the capabilities of the Mathematica program that can be applied to solve problems related to the modeling, control, simulation and identification of a system [2]. The responses of the identification process for the variables and the values of the quality function are included.
Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation
Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.
2015-01-01
In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs
The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input
Senner, Jill E.; Baud, Matthew R.
2017-01-01
An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
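The quantity behind a first-order Sobol' index can be shown with a brute-force double loop over uniform [0,1] inputs: S_i = Var(E[f|x_i]) / Var(f). Real analyses of a land surface model use far more efficient sampling (e.g. Saltelli sequences) and the model's actual parameter ranges; the additive test function below is only there to make the index visible.

```python
import random

def first_order_sobol(f, dim, i, outer=500, inner=200, seed=3):
    """Estimate S_i = Var(E[f | x_i]) / Var(f) for uniform [0,1] inputs."""
    rng = random.Random(seed)
    cond_means, all_vals = [], []
    for _ in range(outer):
        xi = rng.random()                    # fix the i-th input
        vals = []
        for _ in range(inner):               # average over the other inputs
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            vals.append(f(x))
        cond_means.append(sum(vals) / inner)
        all_vals.extend(vals)
    m = sum(all_vals) / len(all_vals)
    var_total = sum((v - m)**2 for v in all_vals) / len(all_vals)
    mc = sum(cond_means) / outer
    var_cond = sum((c - mc)**2 for c in cond_means) / outer
    return var_cond / var_total
```

For f(x) = x0 + 2*x1 the exact indices are S0 = 0.2 and S1 = 0.8, since the second input contributes four times the variance of the first.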
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2014-01-01
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems of parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\\eta(\\theta)$, where $\\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \\eta(\\theta) + \\epsilon$, where $\\epsilon$ accounts for measurement error and possibly other error sources. When non-linearity is present in $\\eta(\\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\\eta(\\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
Ecological input-output modeling for embodied resources and emissions in Chinese economy 2005
Chen, Z. M.; Chen, G. Q.; Zhou, J. B.; Jiang, M. M.; Chen, B.
2010-07-01
For the embodiment of natural resources and environmental emissions in the Chinese economy in 2005, a biophysical balance modeling is carried out based on an extension of the economic input-output table into an ecological one, integrating the economy with its various environmental driving forces. The included resource flows into the primary resource sectors and environmental emission flows from the primary emission sectors belong to seven categories: energy resources in terms of fossil fuels, hydropower and nuclear energy, biomass, and other sources; freshwater resources; greenhouse gas emissions in terms of CO2, CH4, and N2O; industrial wastes in terms of waste water, waste gas, and waste solid; exergy in terms of fossil fuel resources, biological resources, mineral resources, and environmental resources; and solar emergy and cosmic emergy in terms of climate resources, soil, fossil fuels, and minerals. The resulting database of embodiment intensities and sectoral embodiments of natural resources and environmental emissions has essential implications for systems ecology and ecological economics in general, and for global climate change in particular.
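The embodiment accounting amounts to solving eps = d + eps·A, i.e. eps = d(I - A)^(-1): the total (direct plus indirect) resource intensity per unit of final output. A tiny two-sector sketch with made-up coefficients, using the convergent power-series iteration instead of an explicit matrix inverse:

```python
def embodiment_intensities(A, d, iters=200):
    """Fixed-point iteration for eps = d + eps A (converges when the
    intermediate-use matrix A is productive, spectral radius < 1)."""
    n = len(d)
    eps = d[:]
    for _ in range(iters):
        eps = [d[j] + sum(eps[k] * A[k][j] for k in range(n)) for j in range(n)]
    return eps

A = [[0.2, 0.3],   # made-up intermediate-use coefficients (row: selling sector)
     [0.1, 0.4]]
d = [1.0, 0.5]     # made-up direct resource input per unit sectoral output
eps = embodiment_intensities(A, d)
```

Each iteration adds one more tier of the supply chain, so eps accumulates the resources embodied at every upstream stage, which is exactly the "embodiment intensity" of the abstract.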
Institute of Scientific and Technical Information of China (English)
2010-01-01
The status of agricultural input and output in southern Xinjiang, China, is introduced: lack of agricultural input, a low level of agricultural modernization, excessive fertilizer use, serious environmental damage, shortage of water resources, tremendous pressure on the ecological balance, insignificant economic and social benefits of agricultural production, agriculture remaining a weak industry, the agricultural economy being the economic subject of southern Xinjiang, and backward economic development. Taking the Aksu area as an example, and using input and output data for the years 2002-2007, an input-output model for the regional agriculture of southern Xinjiang is established by principal component analysis. DPS software is used to solve the model. Then, Eviews software is adopted to revise and test the model in order to analyze and evaluate the economic significance of the results obtained, and to make additional explanations of the relevant model. Since agricultural economic output is seriously restricted in southern Xinjiang at present, the following countermeasures are put forward: adjusting the structure of agricultural land, improving the utilization ratio of land, increasing agricultural input, realizing agricultural modernization, rationally utilizing water resources, maintaining the eco-environmental balance, enhancing awareness of agricultural insurance, minimizing risk and loss, taking the road of industrialization of characteristic agricultural products, and realizing the transfer of surplus labor force.
Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.
2008-07-01
To assess global water availability and use at a subannual timescale, an integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. The model simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). This first part of the two-feature report describes the six modules and the input meteorological forcing. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket type model that simulates energy and water balance on land surfaces. The crop growth module is a relatively simple model based on concepts of heat unit theory, potential biomass, and a harvest index. In the reservoir operation module, 452 major reservoirs with >1 km3 each of storage capacity store and release water according to their own rules of operation. Operating rules were determined for each reservoir by an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global and continental scales, and in individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 basins. The error in the peak was less than ±1 mo in 19 of the 27
Some tests for parameter constancy in cointegrated VAR-models
DEFF Research Database (Denmark)
Hansen, Henrik; Johansen, Søren
1999-01-01
Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...
Spatio-temporal modeling of nonlinear distributed parameter systems
Li, Han-Xiong
2011-01-01
The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and to develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification of the modeling of DPS is presented first, which includes