WorldWideScience

Sample records for model parameter set

  1. Setting Parameters for Biological Models With ANIMO

    Directory of Open Access Journals (Sweden)

    Stefano Schivo

    2014-03-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions between biological entities in the form of a graph, while the parameters determine the speed at which these interactions occur. When a mismatch is observed between the behavior of an ANIMO model and experimental data, we want to update the model so that it explains the new data. In general, the topology of a model can be expanded with new (known or hypothetical) nodes to enable it to match experimental data. However, the unrestrained addition of new parts to a model causes two problems: models can become too complex too fast, to the point of being intractable, and too many parts marked as "hypothetical" or "not known" make a model unrealistic. Even if changing the topology is normally the easier task, these problems push us to try a better parameter fit as a first step, and to resort to modifying the model topology only as a last resort. In this paper we show the support added in ANIMO to ease the task of expanding the knowledge on biological networks, concentrating in particular on parameter settings.
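    The "parameter fit first, topology change last" strategy can be pictured with a deliberately tiny sketch (this is an illustration of the idea, not ANIMO's actual fitting routine or model semantics): a one-node activation model whose single rate constant is refit to new data by grid search.

```python
# Illustrative sketch (not ANIMO's fitting algorithm): before touching the
# network topology, try to explain new data by refitting an interaction rate
# parameter k in a hypothetical one-node activation model dx/dt = k * (1 - x).

def simulate(k, steps=50, dt=0.1):
    """Euler integration of dx/dt = k * (1 - x), with x(0) = 0."""
    x, traj = 0.0, []
    for _ in range(steps):
        x += dt * k * (1.0 - x)
        traj.append(x)
    return traj

def fit_rate(data, candidates):
    """Pick the rate constant whose trajectory best matches the data (SSE)."""
    def sse(k):
        return sum((m - d) ** 2 for m, d in zip(simulate(k), data))
    return min(candidates, key=sse)

# Stand-in "experimental" data generated with k = 0.8; the grid search recovers it.
data = simulate(0.8)
best = fit_rate(data, [0.2 * i for i in range(1, 11)])  # grid 0.2 .. 2.0
```

Only when no candidate in the parameter grid explains the data acceptably would one fall back to adding (hypothetical) nodes to the topology.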

  2. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

    … (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called "Worst Case Traffic" that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User… It is therefore essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper presents approximate methods and discusses their applicability. We then discuss the problem of obtaining traffic characteristic values for a connection that has crossed a series of switching nodes. This problem is particularly relevant for the traffic contract components corresponding to ICIs…

  3. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions …

  4. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two-Liquid (NRTL) model for the local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, derived from literature data, for use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
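    To illustrate what such empirical interaction parameters do, here is the ordinary (non-electrolyte) binary NRTL model; the electrolyte NRTL used with ASPEN adds long-range ion-interaction terms that are omitted here, and the parameter values below are purely illustrative, not values from the report.

```python
import math

# Minimal binary NRTL sketch. tau12, tau21 and alpha are the kind of empirical
# interaction parameters that a regression against literature data supplies.

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) for a binary mixture."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

# In the pure-component limit the activity coefficient tends to 1.
g1, _ = nrtl_gamma(1.0, tau12=1.2, tau21=0.5)
```

Fitting tau12 and tau21 to measured equilibria is the binary analogue of the parameter-determination task the report performs for the multi-solute electrolyte system.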

  5. A novel level set model with automated initialization and controlling parameters for medical image segmentation.

    Science.gov (United States)

    Liu, Qingyi; Jiang, Mingyan; Bai, Peirui; Yang, Guang

    2016-03-01

    In this paper, a level set model that does not require manually generating an initial contour or setting controlling parameters is proposed for medical image segmentation. The contribution of this paper is mainly manifested in three points. First, we propose a novel adaptive mean shift clustering method based on global image information to guide the evolution of the level set. By simple threshold processing, the results of mean shift clustering can automatically and rapidly generate an initial contour for level set evolution. Second, we devise several new functions to estimate the controlling parameters of the level set evolution based on the clustering results and image characteristics. Third, the reaction-diffusion method is adopted to supersede the distance regularization term of the RSF level set model, which improves the accuracy and speed of segmentation effectively with less manual intervention. Experimental results demonstrate the performance and efficiency of the proposed model for medical image segmentation.
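    The mean shift step that seeds the level set can be pictured in one dimension. This is the basic flat-kernel update on grey levels, a toy stand-in for the paper's adaptive, global-information variant: each point repeatedly moves to the mean of its neighbours within a bandwidth, so intensity groups collapse onto cluster modes that thresholding can then turn into an initial contour.

```python
# Toy 1-D flat-kernel mean shift on pixel intensities (illustrative only).

def mean_shift_1d(values, bandwidth, iters=50):
    """Each point moves to the mean of the data within `bandwidth` of it."""
    modes = list(values)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neigh = [v for v in values if abs(v - m) <= bandwidth]
            new_modes.append(sum(neigh) / len(neigh))
        modes = new_modes
    return modes

# Two well-separated intensity groups collapse onto two modes.
pixels = [10, 11, 12, 200, 201, 202]
modes = mean_shift_1d(pixels, bandwidth=20)
```

In the segmentation setting the two recovered modes would correspond to object and background intensities, from which an initial contour follows by thresholding.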

  6. A new LPV modeling approach using PCA-based parameter set mapping to design a PSS

    Directory of Open Access Journals (Sweden)

    Mohammad B. Abolhasani Jabali

    2017-01-01

    This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principal component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults containing the system nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.

  7. A new LPV modeling approach using PCA-based parameter set mapping to design a PSS.

    Science.gov (United States)

    Jabali, Mohammad B Abolhasani; Kazemi, Mohammad H

    2017-01-01

    This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principal component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults containing the system nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.
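    The PCA step behind parameter set mapping can be sketched in a toy two-dimensional form (the paper works with higher-dimensional scheduling-parameter samples): many sampled parameter points are replaced by their spread along the leading principal component, shrinking the parameter region the LPV model must cover.

```python
import math

# Toy 2-D PCA: find the leading principal component of scheduling-parameter
# samples via the 2x2 covariance matrix (closed-form eigenvector).

def leading_component(samples):
    """Unit eigenvector of the largest eigenvalue of the 2x2 covariance."""
    n = len(samples)
    mx = sum(p[0] for p in samples) / n
    my = sum(p[1] for p in samples) / n
    sxx = sum((p[0] - mx) ** 2 for p in samples) / n
    syy = sum((p[1] - my) ** 2 for p in samples) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in samples) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]] and its eigenvector.
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2.0 * sxy))
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Nearly collinear parameter samples: one direction explains almost everything,
# so one scheduling coordinate can replace two.
pts = [(t, 2 * t + 0.01 * ((-1) ** i)) for i, t in enumerate(range(10))]
direction = leading_component(pts)
```

Projecting the samples onto this direction is the "mapping" that reduces the number of vertices (and hence local models) in the polytopic LPV description.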

  8. Evaluating climate model performance with various parameter sets using observations over the recent past

    Directory of Open Access Journals (Sweden)

    M. F. Loutre

    2011-05-01

    Many sources of uncertainty limit the accuracy of climate projections. Among them, we focus here on parameter uncertainty, i.e. the imperfect knowledge of the values of many physical parameters in a climate model. We therefore use LOVECLIM, a global three-dimensional Earth system model of intermediate complexity, and vary several parameters within a range based on the expert judgement of model developers. Nine climatic parameter sets and three carbon cycle parameter sets are selected because they yield present-day climate simulations coherent with observations and they cover a wide range of climate responses to doubled atmospheric CO2 concentration and freshwater flux perturbation in the North Atlantic. Moreover, they also lead to a large range of atmospheric CO2 concentrations in response to prescribed emissions. Consequently, we have at our disposal 27 alternative versions of LOVECLIM (each corresponding to one parameter set) that provide very different responses to some climate forcings. The 27 model versions are then used to illustrate the range of responses provided over the recent past, to compare the time evolution of climate variables over the time interval for which they are available (the last few decades up to more than one century) and to identify the outliers and the "best" versions over that particular time span. For example, between 1979 and 2005, the simulated global annual mean surface temperature increase ranges from 0.24 °C to 0.64 °C, while the simulated increase in atmospheric CO2 concentration varies between 40 and 50 ppmv. Measurements over the same period indicate an increase in global annual mean surface temperature of 0.45 °C (Brohan et al., 2006) and an increase in atmospheric CO2 concentration of 44 ppmv (Enting et al., 1994; GLOBALVIEW-CO2, 2006). Only a few parameter sets yield simulations that reproduce the observed key variables of the climate system over the last …

  9. Modeling Neurovascular Coupling from Clustered Parameter Sets for Multimodal EEG-NIRS

    Directory of Open Access Journals (Sweden)

    M. Tanveer Talukdar

    2015-01-01

    Full Text Available Despite significant improvements in neuroimaging technologies and analysis methods, the fundamental relationship between local changes in cerebral hemodynamics and the underlying neural activity remains largely unknown. In this study, a data driven approach is proposed for modeling this neurovascular coupling relationship from simultaneously acquired electroencephalographic (EEG and near-infrared spectroscopic (NIRS data. The approach uses gamma transfer functions to map EEG spectral envelopes that reflect time-varying power variations in neural rhythms to hemodynamics measured with NIRS during median nerve stimulation. The approach is evaluated first with simulated EEG-NIRS data and then by applying the method to experimental EEG-NIRS data measured from 3 human subjects. Results from the experimental data indicate that the neurovascular coupling relationship can be modeled using multiple sets of gamma transfer functions. By applying cluster analysis, statistically significant parameter sets were found to predict NIRS hemodynamics from EEG spectral envelopes. All subjects were found to have significant clustered parameters (P<0.05 for EEG-NIRS data fitted using gamma transfer functions. These results suggest that the use of gamma transfer functions followed by cluster analysis of the resulting parameter sets may provide insights into neurovascular coupling in human neuroimaging data.

  10. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    Science.gov (United States)

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of
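    The recalibration idea can be sketched with a deterministic homogeneous-mixing SIR run whose transmission rate is rescanned until its prevalence peak matches a reference curve. In the paper the reference comes from contact-network simulations; here another SIR run merely stands in for it, and all rates are illustrative.

```python
# Hedged sketch of the recalibration step: adjust the homogeneous-mixing
# transmission rate beta so the epidemic peak time matches a reference curve.

def sir_peak(beta, gamma=0.1, n=1000, i0=1, dt=0.1, steps=2000):
    """Euler-integrated SIR; returns (peak_time, peak_prevalence)."""
    s, i = n - i0, float(i0)
    best_t, best_i = 0.0, i
    for step in range(steps):
        new_inf = beta * s * i / n * dt
        rec = gamma * i * dt
        s, i = s - new_inf, i + new_inf - rec
        if i > best_i:
            best_t, best_i = (step + 1) * dt, i
    return best_t, best_i

target_time, _ = sir_peak(beta=0.4)              # stand-in "contact" curve
candidates = [0.2 + 0.05 * k for k in range(9)]  # grid 0.2 .. 0.6
recalibrated = min(candidates,
                   key=lambda b: abs(sir_peak(b)[0] - target_time))
```

In the study the fitted homogeneous-mixing parameters were found to depend linearly on the parameters used in the contact-data simulations, which is what makes such a rescaling practical.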

  11. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkänen, M A; Holli, K; Ojala, A T; Hyödynmaa, S; Järvenpää, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy, where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, in which the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets to the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of 120 patients receiving a mean dose (D̄) or EUD higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set, as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)
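    The LKB model itself is compact enough to sketch: a dose-volume histogram is reduced to a generalised EUD via the volume parameter n, then a probit curve with slope m and position TD50 gives the complication probability. The parameter values below are placeholders in the style of published lung sets, not the specific sets evaluated in the letter.

```python
import math

# LKB NTCP sketch: EUD = (sum_i v_i * D_i**(1/n))**n, then
# NTCP = Phi((EUD - TD50) / (m * TD50)) with Phi the standard normal CDF.

def eud(dvh, n):
    """Generalised EUD of a DVH given as (fractional_volume, dose_Gy) pairs."""
    return sum(v * d ** (1.0 / n) for v, d in dvh) ** n

def lkb_ntcp(dvh, n, m, td50):
    t = (eud(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy lung DVH: 30% of the volume at 20 Gy, 70% nearly spared.
dvh = [(0.3, 20.0), (0.7, 2.0)]
p = lkb_ntcp(dvh, n=0.9, m=0.18, td50=30.0)
```

Swapping in a different published (n, m, TD50) triple changes the predicted incidence, which is exactly the sensitivity the letter investigates.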

  12. Parameter setting and input reduction

    NARCIS (Netherlands)

    Evers, A.; van Kampen, N.J.

    2008-01-01

    The language acquisition procedure identifies certain properties of the target grammar before others. The evidence from the input is processed in a stepwise order. Section 1 equates that order and its typical effects with an order of parameter setting. The question is how the acquisition procedure

  13. An Ultrafast Maximum Power Point Setting Scheme for Photovoltaic Arrays Using Model Parameter Identification

    Directory of Open Access Journals (Sweden)

    Zhaohui Cen

    2015-01-01

    Maximum power point tracking (MPPT) for photovoltaic (PV) arrays is essential to optimize conversion efficiency under variable and nonuniform irradiance conditions. Unfortunately, conventional MPPT algorithms such as perturb and observe (P&O), incremental conductance, and the current sweep method need to iterate the command current or voltage and frequently operate power converters, with associated losses. Under partial overcast conditions, tracking the real MPP on multi-peak P-I or P-V curves becomes highly challenging, with an associated increase in search time and converter operation, leading to unnecessary power being lost in the MPP tracking process. In this paper, these drawbacks of MPPT-controlled converters are addressed. In order to separate the search algorithm from converter operation, a model parameter identification approach is presented to estimate the insolation conditions of each PV panel and build a real-time overall P-I curve of the PV array. Subsequently, a simple but effective global MPPT algorithm is proposed to track the MPP on the overall P-I curve obtained from the identified PV array model, ensuring that the converter works at the MPP. The novel MPPT is ultrafast, resulting in conserved power in the tracking process. Finally, simulations in different scenarios are executed to validate the novel scheme's effectiveness and advantages.
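    Only the final step is sketched here: once per-panel insolation has been identified, the overall P-I curve of a series string can be built and its global maximum picked directly, instead of perturb-and-observe iteration. The single-panel V(I) expression below is a made-up placeholder, not the identified model from the paper.

```python
# Hedged sketch: scan string current, sum series panel voltages from a toy
# per-panel model, and take the global maximum of the resulting P-I curve.

def panel_voltage(i, isc, voc=20.0):
    """Hypothetical panel curve: near Voc at low current, zero beyond its Isc."""
    if i >= isc:
        return 0.0
    return voc * (1.0 - (i / isc) ** 4)

def global_mpp(panel_iscs, steps=500):
    """Return (current, power) at the global MPP of the series string."""
    i_max = max(panel_iscs)
    best = (0.0, 0.0)
    for k in range(1, steps + 1):
        i = i_max * k / steps
        p = i * sum(panel_voltage(i, isc) for isc in panel_iscs)
        if p > best[1]:
            best = (i, p)
    return best

# Partial shading: one panel at full sun (Isc = 5 A), two shaded (Isc = 3 A)
# give a multi-peak P-I curve; the scan still finds the global peak.
current, power = global_mpp([5.0, 3.0, 3.0])
```

Because the curve is evaluated on the identified model rather than on the live converter, no power is wasted exploring the wrong peaks.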

  14. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS...

  15. Final Report for NFE-07-00912: Development of Model Fuels Experimental Engine Data Base & Kinetic Modeling Parameter Sets

    Energy Technology Data Exchange (ETDEWEB)

    Bunting, Bruce G [ORNL]

    2012-10-01

    The automotive and engine industries are in a period of very rapid change driven by new emission standards, new types of aftertreatment, new combustion strategies, the introduction of new fuels, and the drive for increased fuel economy and efficiency. The rapid pace of these changes has put more pressure on the need for modeling of engine combustion and performance, in order to shorten product design and introduction cycles. New combustion strategies include homogeneous charge compression ignition (HCCI), partially premixed combustion compression ignition (PCCI), and dilute low-temperature combustion, which are being developed for lower emissions and improved fuel economy. New fuels include bio-fuels such as ethanol or bio-diesel, drop-in bio-derived fuels, and those derived from new crude oil sources such as gas-to-liquids, coal-to-liquids, oil sands, oil shale, and wet natural gas. Kinetic modeling of the combustion process for these new combustion regimes and fuels is necessary in order to allow modeling and performance assessment for engine design purposes. In the research covered by this CRADA, ORNL developed and supplied experimental data related to engine performance with new fuels and new combustion strategies, along with interpretation and analysis of such data, and consulting for Reaction Design, Inc. (RD). RD performed additional analysis of this data in order to extract important parameters and to confirm engine and kinetic models. The data generated were generally published to make them available to the engine and automotive design communities and also to the Reaction Design Model Fuels Consortium (MFC).

  16. Development of a new version of the Liverpool Malaria Model. I. Refining the parameter settings and mathematical formulation of basic processes based on a literature review

    Directory of Open Access Journals (Sweden)

    Jones Anne E

    2011-02-01

    Background: A warm and humid climate triggers several water-associated diseases such as malaria. Climate- or weather-driven malaria models therefore allow for a better understanding of malaria transmission dynamics. The Liverpool Malaria Model (LMM) is a mathematical-biological model of malaria parasite dynamics using daily temperature and precipitation data. In this study, the parameter settings of the LMM are refined and a new mathematical formulation of key processes related to the growth and size of the vector population are developed. Methods: One of the most comprehensive studies to date in terms of gathering entomological and parasitological information from the literature was undertaken for the development of a new version of an existing malaria model. The knowledge was needed to allow the justification of new settings of various model parameters and motivated changes of the mathematical formulation of the LMM. Results: The first part of the present study developed an improved set of parameter settings and mathematical formulation of the LMM. Important modules of the original LMM version were enhanced in order to achieve a higher biological and physical accuracy. The oviposition as well as the survival of immature mosquitoes were adjusted to field conditions via the application of a fuzzy distribution model. Key model parameters, including the mature age of mosquitoes, the survival probability of adult mosquitoes, the human blood index, the mosquito-to-human (human-to-mosquito) transmission efficiency, the human infectious age, the recovery rate, as well as the gametocyte prevalence, were reassessed by means of entomological and parasitological observations. This paper also revealed that various malaria variables lack information from field studies to be set properly in a malaria modelling approach. Conclusions: Due to the multitude of model parameters and the uncertainty involved in the setting of parameters, an extensive …

  17. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
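    The simplest members of the family described in the report can be sketched directly: the frequency-dependent dynamic stiffness of the unbounded soil is approximated by a spring, a dashpot and a mass with constant (frequency-independent) coefficients. The numeric values below are arbitrary illustrations, not calibrated soil properties.

```python
# Minimal lumped-parameter sketch: complex dynamic stiffness of a
# spring-dashpot-mass approximation to the soil's frequency response.

def dynamic_stiffness(omega, k, c, m):
    """S(omega) = k - m*omega**2 + 1j*omega*c  (k: spring, c: dashpot, m: mass)."""
    return k - m * omega ** 2 + 1j * omega * c

# Static limit recovers the spring constant; damping grows with frequency.
S0 = dynamic_stiffness(0.0, k=1.0e8, c=5.0e6, m=2.0e4)
S10 = dynamic_stiffness(10.0, k=1.0e8, c=5.0e6, m=2.0e4)
```

Advanced lumped-parameter models add further spring-dashpot branches so the rational approximation can follow a more strongly frequency-dependent soil response.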

  18. Parameter sets for upper and lower bounds on soil-to-indoor-air contaminant attenuation predicted by the Johnson and Ettinger vapor intrusion model

    Science.gov (United States)

    Tillman, Fred D.; Weaver, James W.

    Migration of volatile chemicals from the subsurface into overlying buildings is known as vapor intrusion (VI). Under certain circumstances, people living in homes above contaminated soil or ground water may be exposed to harmful levels of these vapors. A popular VI screening-level algorithm widely used in the United States, Canada and the UK to assess this potential risk is the "Johnson and Ettinger" (J&E) model. Concern exists over using the J&E model for deciding whether or not further action is necessary at sites, as many parameters are not routinely measured (or are un-measurable). Using EPA-recommended ranges of parameter values for nine soil-type/source depth combinations, input parameter sets were identified that correspond to bounding results of the J&E model. The results established the existence of generic upper and lower bound parameter sets for maximum and minimum exposure for all soil types and depths investigated. Using the generic upper and lower bound parameter sets, an analysis can be performed that, given the limitations of the input ranges and the model, bounds the attenuation factor in a VI investigation.
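    The bounding exercise generalises to any screening model: evaluate it at every corner of the recommended parameter ranges and keep the extremes. The attenuation function and ranges below are made-up monotone stand-ins, not the actual J&E equations or EPA values; only the corner-enumeration pattern is the point.

```python
from itertools import product

# Generic corner-set bounding sketch (hypothetical stand-in model, not J&E).

def attenuation(diffusivity, depth, mixing_height):
    """Made-up monotone stand-in: attenuation grows with diffusivity and
    shrinks with source depth and indoor mixing height."""
    return diffusivity / (depth * mixing_height)

RANGES = {                              # illustrative ranges only
    "diffusivity": (1e-4, 5e-3),        # cm^2/s
    "depth": (100.0, 1000.0),           # cm
    "mixing_height": (200.0, 400.0),    # cm
}

def bounds(model, ranges):
    """Evaluate the model at every corner of the ranges; return (min, max)."""
    names = list(ranges)
    values = [model(**dict(zip(names, corner)))
              for corner in product(*(ranges[n] for n in names))]
    return min(values), max(values)

lo_alpha, hi_alpha = bounds(attenuation, RANGES)
```

For a monotone model the corners are guaranteed to bracket the output, which is what makes generic upper- and lower-bound parameter sets possible.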

  19. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    Science.gov (United States)

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.

  20. Setting parameters in the cold chain

    Directory of Open Access Journals (Sweden)

    Victoria Rodríguez

    2011-12-01

    Breaks in the cold chain cause important economic losses in food and pharmaceutical companies. Many of the failures in the cold chain are due to improper adjustment of equipment parameters, such as setting the parameters for theoretical conditions without a corresponding check under normal operation. Companies that transport refrigerated products must be able to adjust the parameters of the equipment in an easy and quick way, to adapt their functioning to changing environmental conditions. This article presents the results of a study carried out with a food distribution company. The main objective of the study is to verify the effectiveness of Six Sigma as a methodological tool to adjust the equipment in the cold chain. The second objective is more specific: to study the impact of reducing the volume of storage in the truck, the initial temperature of the storage area in the truck, and the frequency of defrost in the transport of refrigerated products.

  1. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  2. Economic communication model set

    Science.gov (United States)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research work targeted at investigating economic communications using agent-based models. An agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model is based on the same general concept but has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: the theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergistic effect.

  3. A global data set of land-surface parameters

    International Nuclear Information System (INIS)

    Claussen, M.; Lohmann, U.; Roeckner, E.; Schulzweida, U.

    1994-01-01

    A global data set of land-surface parameters is provided for the climate model ECHAM developed at the Max-Planck-Institut für Meteorologie in Hamburg. These parameters are: background (surface) albedo α, surface roughness length z0, leaf area index LAI, fractional vegetation cover (vegetation ratio) cv, and forest ratio cf. The global set of surface parameters is constructed by allocating parameters to the major ecosystem complexes of Olson et al. (1983). The global distribution of ecosystem complexes is given at a resolution of 0.5° × 0.5°. The latter data are compatible with the vegetation types used in the BIOME model of Prentice et al. (1992), which is a potential candidate for an interactive submodel within a comprehensive model of the climate system. (orig.)
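    The allocation step amounts to a per-cell table lookup: each grid cell carries an ecosystem class, and the surface parameters are assigned from a class-indexed table. The classes and numbers below are invented placeholders, not values from the actual data set.

```python
# Sketch of per-cell parameter allocation (placeholder classes and values).

ECOSYSTEM_PARAMS = {  # class -> (albedo, z0_m, LAI, veg_ratio, forest_ratio)
    "conifer_forest": (0.12, 1.00, 5.0, 0.90, 0.95),
    "grassland":      (0.20, 0.05, 2.0, 0.80, 0.00),
    "desert":         (0.35, 0.01, 0.1, 0.05, 0.00),
}

def allocate(grid):
    """Map a 2-D grid of ecosystem classes to per-cell parameter tuples."""
    return [[ECOSYSTEM_PARAMS[cell] for cell in row] for row in grid]

# A tiny 2x2 "world": each cell gets the parameters of its ecosystem class.
grid = [["grassland", "desert"], ["conifer_forest", "grassland"]]
fields = allocate(grid)
```

At 0.5° × 0.5° the same lookup simply runs over a 360 × 720 class grid, producing one global field per parameter.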

  4. Response model parameter linking

    NARCIS (Netherlands)

    Barrett, M.L.D.

    2015-01-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require

  5. Can the Responses of Photosynthesis and Stomatal Conductance to Water and Nitrogen Stress Combinations Be Modeled Using a Single Set of Parameters?

    Science.gov (United States)

    Zhang, Ningyi; Li, Gang; Yu, Shanxiang; An, Dongsheng; Sun, Qian; Luo, Weihong; Yin, Xinyou

    2017-01-01

    Accurately predicting photosynthesis in response to water and nitrogen stress is the first step toward predicting crop growth, yield and many quality traits under fluctuating environmental conditions. While mechanistic models are capable of predicting photosynthesis under fluctuating environmental conditions, simplifying the parameterization procedure is important for a wide range of model applications. In this study, the biochemical photosynthesis model of Farquhar, von Caemmerer and Berry (the FvCB model) and the stomatal conductance model of Ball, Woodrow and Berry, as revised by Leuning and Yin (the BWB-Leuning-Yin model), were parameterized for Lilium (L. auratum × speciosum "Sorbonne") grown under different water and nitrogen conditions. Linear relationships were found between biochemical parameters of the FvCB model and leaf nitrogen content per unit leaf area (Na), and between mesophyll conductance and Na under different water and nitrogen conditions. By incorporating these Na-dependent linear relationships, the FvCB model was able to predict the net photosynthetic rate (An) in response to all water and nitrogen conditions. In contrast, stomatal conductance (gs) could be accurately predicted only if parameters in the BWB-Leuning-Yin model were adjusted specifically to water conditions; otherwise gs was underestimated by 9% under well-watered conditions and overestimated by 13% under water-deficit conditions. However, the 13% overestimation of gs under water-deficit conditions led to only 9% overestimation of An by the coupled FvCB and BWB-Leuning-Yin model, whereas the 9% underestimation of gs under well-watered conditions affected the prediction of An very little. Our results indicate that to accurately predict An and gs under different water and nitrogen conditions, only a few parameters in the BWB-Leuning-Yin model need to be adjusted according to water conditions, whereas all other parameters are either conservative or can be adjusted according to …
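    The Na-dependent parameterization can be sketched for the Rubisco-limited branch of the FvCB model: Vcmax is taken as a linear function of leaf nitrogen per area, then the standard rate expression is evaluated. The slope, intercept and kinetic constants below are illustrative round numbers, not the fitted Lilium values.

```python
# Hedged FvCB sketch (Rubisco-limited branch only, placeholder constants).

def vcmax_from_na(na, slope=60.0, intercept=-10.0):
    """Vcmax (umol m-2 s-1) as a linear function of Na (g N m-2)."""
    return max(0.0, slope * na + intercept)

def an_rubisco(na, ci=250.0, gamma_star=40.0, kc=270.0, ko=165.0,
               o=210.0, rd=1.0):
    """Rubisco-limited net rate: Ac = Vcmax*(Ci - G*)/(Ci + Kc*(1 + O/Ko)) - Rd.
    Ci, gamma_star, kc in umol mol-1; o, ko in mmol mol-1; rd in umol m-2 s-1."""
    km = kc * (1.0 + o / ko)
    return vcmax_from_na(na) * (ci - gamma_star) / (ci + km) - rd

# Higher leaf nitrogen -> higher Vcmax -> higher net photosynthesis.
low, high = an_rubisco(1.0), an_rubisco(2.0)
```

Folding the nitrogen dependence into one linear relationship is what lets a single parameter set cover all the nitrogen treatments, as the abstract reports.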

  6. The STATFLUX code: a statistical method for calculation of flow and set of parameters, based on the Multiple-Compartment Biokinetical Model

    Science.gov (United States)

    Garcia, F.; Mesa, J.; Arruda-Neto, J. D. T.; Helene, O.; Vanin, V.; Milian, F.; Deppman, A.; Rodrigues, T. E.; Rodriguez, O.

    2007-03-01

    The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
    Program summary
    Title of program: STATFLUX
    Catalogue identifier: ADYS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: none
    Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
    Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
    Operating system: Windows 2000 and Windows XP
    Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
    Memory required to execute with typical data: 8 Mbytes of RAM memory and 100 MB of hard disk memory
    No. of bits in a word: 16
    No. of lines in distributed program, including test data, etc.: 6912
    No. of bytes in distributed program, including test data, etc.: 229 541
    Distribution format: tar.gz
    Nature of the physical problem: The investigation of transport mechanisms for
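As a minimal illustration of the least-squares step, the sketch below fits a single transfer coefficient of a one-compartment model (dC/dt = -kC) to a concentration time series. STATFLUX itself handles general multiple-compartment systems; all names here are illustrative.

```python
import numpy as np

def fit_transfer_coefficient(t, c, c0):
    """Least-squares estimate of a single transfer (loss) coefficient k
    for the one-compartment model dC/dt = -k C, i.e. C(t) = C0 exp(-k t).

    A minimal stand-in for the multi-compartment least-squares fits the
    abstract describes; not the STATFLUX algorithm itself.
    """
    # Linearize: ln C = ln C0 - k t, then solve for k by least squares.
    y = np.log(np.asarray(c) / c0)
    t = np.asarray(t)
    return -np.sum(t * y) / np.sum(t * t)

# Synthetic "measured" concentrations generated with k = 0.3 per day
t = np.linspace(0.5, 10.0, 20)
c = 5.0 * np.exp(-0.3 * t)
print(round(fit_transfer_coefficient(t, c, 5.0), 3))  # 0.3
```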

  7. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power, several complex optimization algorithms have emerged, but none of the algorithms gives a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
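Tukey's half-space depth is expensive to compute exactly in higher dimensions, but a Monte Carlo approximation over random projection directions conveys the idea used above: deep parameter vectors sit near the centre of the cloud of well-performing vectors. This is a sketch of the concept, not the authors' implementation.

```python
import numpy as np

def halfspace_depth(point, cloud, n_dirs=2000, seed=0):
    """Monte Carlo approximation of Tukey's half-space depth.

    The depth of `point` relative to `cloud` is the smallest fraction
    of cloud points on one side of any hyperplane through `point`;
    robust, deep parameter vectors sit near the centre of the cloud.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Project the cloud onto each direction, centred at `point`
    proj = (cloud - point) @ dirs.T          # (n_points, n_dirs)
    frac = (proj >= 0).mean(axis=0)          # fraction on one side
    return float(np.minimum(frac, 1 - frac).min())

rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 2))            # toy "parameter vectors"
centre_depth = halfspace_depth(np.zeros(2), cloud)
edge_depth = halfspace_depth(np.array([4.0, 4.0]), cloud)
print(centre_depth > edge_depth)  # True
```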

  8. Fundamental safety-parameter set for boiling water reactors

    International Nuclear Information System (INIS)

    Johnson, C.B.; Mollerus, F.S.; Carmichael, L.A.

    1980-12-01

    A minimum set of parameters is proposed which will indicate the overall safety status of a commercial Boiling Water Reactor. Parameters were selected by identifying those sufficient to determine if functions of fundamental importance to safety are being accomplished. The selected set was subjected to verification by comparison with a broad spectrum of postulated events. Appropriate control room display of the parameter set should assist the operators in determining the safety status of the plant quickly and accurately, even if a plant event is not immediately understood

  9. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets, where in expectation each comparison set of a given cardinality occurs the same number of times, for a broad class of Thurstone choice models the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
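For the special case of pair comparisons, the maximum-likelihood estimator mentioned above reduces to the Bradley-Terry model, whose MLE can be computed with the classic minorization-maximization (Zermelo) iteration. A minimal sketch, not the authors' code:

```python
import numpy as np

def bradley_terry_mle(wins, n_iter=200):
    """Maximum-likelihood strengths for the Bradley-Terry model via the
    classic minorization-maximization (Zermelo) iteration.

    wins[i][j] = number of times item i beat item j.  Strengths are
    normalized to sum to 1.  Assumes every item wins at least once.
    """
    wins = np.asarray(wins, dtype=float)
    p = np.ones(wins.shape[0])
    for _ in range(n_iter):
        total_wins = wins.sum(axis=1)
        games = wins + wins.T                 # games[i][j] = total i-vs-j
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = total_wins / denom
        p /= p.sum()
    return p

# Item 0 beats item 1 more often than the reverse
wins = [[0, 7], [3, 0]]
p = bradley_terry_mle(wins)
print(p[0] > p[1])  # True
```

With only two items the MLE matches the empirical win rate: item 0 ends up with strength 7/10.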

  10. Linking Item Response Model Parameters.

    Science.gov (United States)

    van der Linden, Wim J; Barrett, Michelle D

    2016-09-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of test equating scores on different test forms. This paper argues, however, that the use of item response models does not require any test score equating. Instead, it involves the necessity of parameter linking due to a fundamental problem inherent in the formal nature of these models: their general lack of identifiability. More specifically, item response model parameters need to be linked to adjust for the different effects of the identifiability restrictions used in separate item calibrations. Our main theorems characterize the formal nature of these linking functions for monotone, continuous response models, derive their specific shapes for different parameterizations of the 3PL model, and show how to identify them from the parameter values of the common items or persons in different linking designs.
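As a concrete instance of such a linking function, the widely used mean-sigma method estimates a linear transformation from the difficulties of common items; under the 3PL parameterization the item parameters transform as b' = Ab + B, a' = a/A, c' = c. A sketch of that standard method, not the paper's derivation:

```python
import numpy as np

def mean_sigma_link(b_old, b_new):
    """Mean-sigma estimates of the linear linking function
    theta_new = A * theta_old + B from common-item difficulties."""
    A = np.std(b_new, ddof=1) / np.std(b_old, ddof=1)
    B = np.mean(b_new) - A * np.mean(b_old)
    return A, B

def link_3pl(a, b, c, A, B):
    """Apply the linking transformation to 3PL item parameters:
    b' = A b + B, a' = a / A, c' = c."""
    return np.asarray(a) / A, A * np.asarray(b) + B, np.asarray(c)

# Common items whose difficulties differ by an exact linear map
b_old = np.array([-1.0, 0.0, 1.0, 2.0])
b_new = 1.2 * b_old + 0.5
A, B = mean_sigma_link(b_old, b_new)
print(round(A, 3), round(B, 3))  # 1.2 0.5
```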

  11. The evaluation of set of criticality parameters using scale system

    International Nuclear Information System (INIS)

    Abe, Alfredo; Sanchez, Andrea; Yamaguchi, Mistuo

    2009-01-01

    In evaluating the criticality safety of a nuclear fuel facility, it is important to apply a consistent methodology that considers every aspect of the various types of criticality parameters. Usually, the critical parameters are compiled and arranged into handbooks, and these handbooks are based on experience with nuclear facilities, experimental data from criticality safety research facilities, and theoretical studies performed using numerical simulations. Most criticality safety evaluations can be addressed using the criticality parameter data directly from a handbook, but some critical parameters for specific chemical mixtures and/or enrichments may not be available. Consequently, the unavailable parameters have to be evaluated. This work presents a methodology to evaluate a set of critical parameters using the SCALE system for various types of mixtures present at nuclear fuel cycle facilities, for two different levels of enrichment; the results are verified by independent calculations using the MCNP Monte Carlo code. (author)

  12. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    Science.gov (United States)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
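The closeness measure and selection step described above might be sketched as a weighted distance to the a priori parameter set. The weighting scheme and names below are illustrative stand-ins, not the actual regionalization procedure:

```python
import numpy as np

def closeness(candidate, apriori, weights):
    """Weighted distance between a candidate Pareto parameter set and
    the a priori (physically derived) parameter set of a basin.

    Parameters assumed similar across basins carry larger weights, so
    departures from the a priori values of those parameters are
    penalized more.  Illustrative only.
    """
    candidate = np.asarray(candidate, float)
    apriori = np.asarray(apriori, float)
    weights = np.asarray(weights, float)
    return float(np.sqrt(np.sum(weights * (candidate - apriori) ** 2)))

def pick_regional_set(pareto_sets, apriori, weights):
    """Choose the Pareto solution minimizing the closeness measure."""
    dists = [closeness(p, apriori, weights) for p in pareto_sets]
    return int(np.argmin(dists))

apriori = [1.0, 0.5, 2.0]
weights = [1.0, 5.0, 1.0]          # middle parameter deemed most similar
pareto = [[1.1, 0.9, 2.0],         # drifts in the heavily weighted parameter
          [1.4, 0.5, 1.8]]         # drifts only in lightly weighted ones
print(pick_regional_set(pareto, apriori, weights))  # 1
```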

  13. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  14. Analytical one parameter method for PID motion controller settings

    NARCIS (Netherlands)

    van Dijk, Johannes; Aarts, Ronald G.K.M.

    2012-01-01

    In this paper, analytical expressions for PID-controller settings for electromechanical motion systems are presented. It will be shown that, with an adequate frequency-domain-oriented parametrization, the parameters of a PID controller depend analytically on one variable only, the cross-over

  15. Reliability analysis of a sensitive and independent stabilometry parameter set.

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has either not been studied in the literature or not been studied in every stance type used in stabilometry assessments, for example single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Frequency-type parameters and extreme-value parameters usually yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
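The reliability indices used above are straightforward to compute from a subjects-by-trials matrix. The sketch below uses a standard two-way random-effects ICC(2,1) formulation with SEM = SD·sqrt(1 - ICC) and MDC95 = 1.96·sqrt(2)·SEM; it is illustrative and may differ in detail from the study's exact formulas.

```python
import numpy as np

def icc_sem_mdc(x):
    """Test-retest reliability summaries from an (n subjects x k trials)
    matrix: ICC(2,1), SEM = SD * sqrt(1 - ICC), MDC95 = 1.96*sqrt(2)*SEM.
    """
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                 # per-subject means
    col_means = x.mean(axis=0)                 # per-trial means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # rows MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # columns MS
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error MS
    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    sem = x.std(ddof=1) * np.sqrt(1 - icc)
    mdc = 1.96 * np.sqrt(2) * sem
    return icc, sem, mdc

# Nearly identical repeated trials -> ICC close to 1, small SEM/MDC
x = np.array([[10.0, 10.1], [20.0, 19.9], [30.0, 30.2], [40.0, 39.8]])
icc, sem, mdc = icc_sem_mdc(x)
print(icc > 0.99)  # True
```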

  16. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like filter error method, H-infinity and Artificial Neural Networks are finding more and mor

  17. A parameter set for a double-null DEMO reactor

    International Nuclear Information System (INIS)

    Cooke, P.I.H.

    1987-01-01

    The present study is aimed at commenting on the reactor-relevance of the design principles and technology being proposed for NET. The authors propose that a double-null device serve as a basis for a NET-based demonstration reactor. Calculations are carried out to determine the parameter set for reactors based on the double-null NET design, and the results are presented in tabular form. (U.K.)

  18. Exploiting intrinsic fluctuations to identify model parameters.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself as a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework, it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.

  19. Can the responses of photosynthesis and stomatal conductance to water and nitrogen stress combinations be modeled using a single set of parameters?

    NARCIS (Netherlands)

    Zhang, Ningyi; Li, Gang; Yu, Shanxiang; An, Dongsheng; Sun, Qian; Luo, Weihong; Yin, Xinyou

    2017-01-01

    Accurately predicting photosynthesis in response to water and nitrogen stress is the first step toward predicting crop growth, yield and many quality traits under fluctuating environmental conditions. While mechanistic models are capable of predicting photosynthesis under fluctuating environmental

  20. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variances (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and EF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: When arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and EF values. (authors)

  1. The Study of the Optimal Parameter Settings in a Hospital Supply Chain System in Taiwan

    Directory of Open Access Journals (Sweden)

    Hung-Chang Liao

    2014-01-01

    Full Text Available This study proposed optimal parameter settings for a hospital supply chain system (HSCS) when either the total system cost (TSC) or the patient safety level (PSL), or both simultaneously, was considered as the measure of the HSCS's performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factorial experimental design was used to simulate an HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best method of obtaining the optimal parameter settings for the HSCS is the simultaneous consideration of both the TSC and the PSL to measure performance. Also, the results of a sensitivity analysis based on the optimal parameter settings were used to derive adjustable strategies for decision-makers.
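The RSM-plus-GA pipeline can be illustrated in miniature: fit a quadratic response surface to simulated cost observations by least squares, then search the fitted surface for its minimizer (a grid search stands in for the GA here). The cost function and parameter names are toy stand-ins for the hospital supply chain simulation:

```python
import numpy as np

def fit_quadratic_rsm(x, y):
    """Fit a one-factor quadratic response surface y ~ b0 + b1 x + b2 x^2
    by ordinary least squares, as the RSM step does before the search."""
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def minimize_on_grid(coef, lo, hi, n=1001):
    """Crude stand-in for the GA: evaluate the fitted surface on a grid."""
    grid = np.linspace(lo, hi, n)
    vals = coef[0] + coef[1] * grid + coef[2] * grid ** 2
    return grid[np.argmin(vals)]

# Toy "simulated cost" minimized at a safety stock of 50 units
x = np.linspace(0.0, 100.0, 21)
y = 2.0 + 0.01 * (x - 50.0) ** 2
coef = fit_quadratic_rsm(x, y)
print(round(minimize_on_grid(coef, 0.0, 100.0), 1))  # 50.0
```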

  2. Review of the different methods to derive average spacing from resolved resonance parameters sets

    International Nuclear Information System (INIS)

    Fort, E.; Derrien, H.; Lafond, D.

    1979-12-01

    The average spacing of resonances is an important parameter for statistical model calculations, especially for non-fissile nuclei. The different methods to derive this average value from resolved resonance parameter sets have been reviewed and analyzed in order to detect their respective weaknesses and propose recommendations. Possible improvements are suggested

  3. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  4. A market model: uncertainty and reachable sets

    Directory of Open Access Journals (Sweden)

    Raczynski Stanislaw

    2015-01-01

    Full Text Available Uncertain parameters are always present in models that include the human factor. In marketing, the uncertain behavior of consumers makes it difficult to predict future events and elaborate good marketing strategies. Sometimes uncertainty is modeled using stochastic variables. Our approach is quite different. The dynamic market with uncertain parameters is treated using differential inclusions, which permits determination of the corresponding reachable sets. This is not a statistical analysis. We are looking for solutions to the differential inclusions. The purpose of the research is to find a way to obtain and visualise the reachable sets, in order to know the limits for the important marketing variables. The modeling method consists of defining the differential inclusion and finding its solution, using the differential inclusion solver developed by the author. As the result we obtain images of the reachable sets, where the main control parameter is the share of revenue devoted to investment. As an additional result we can also define the optimal investment strategy. The conclusion is that the differential inclusion solver can be a useful tool in market model analysis.
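The reachable-set idea can be illustrated on a toy scalar inclusion x' in {ax + u : u in [u_min, u_max]}: Euler-integrating sampled and extreme control selections brackets the reachable interval. This is only a cartoon of what a differential inclusion solver does; all dynamics and names are invented for illustration.

```python
import numpy as np

def reachable_interval(x0, t_end, dt=0.01, n_controls=50, seed=0):
    """Approximate the reachable set of the scalar differential inclusion
    x' in {a x + u : u in [u_min, u_max]} by Euler-integrating sampled
    piecewise-constant controls plus the two extreme constant controls.

    For scalar linear dynamics the extreme constant controls attain the
    endpoints of the reachable interval; the samples fill the interior.
    """
    a, u_min, u_max = -0.5, 0.0, 1.0
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    endpoints = []
    for _ in range(n_controls):
        x = x0
        for _ in range(n_steps):
            u = rng.uniform(u_min, u_max)    # admissible control selection
            x += dt * (a * x + u)
        endpoints.append(x)
    for u in (u_min, u_max):                 # extreme constant controls
        x = x0
        for _ in range(n_steps):
            x += dt * (a * x + u)
        endpoints.append(x)
    return min(endpoints), max(endpoints)

lo, hi = reachable_interval(x0=1.0, t_end=2.0)
print(lo < hi)  # True
```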

  5. Compositional models for credal sets

    Czech Academy of Sciences Publication Activity Database

    Vejnarová, Jiřina

    2017-01-01

    Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf

  6. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  7. Seasonal evolution of soil and plant parameters on the agricultural Gebesee test site: a database for the set-up and validation of EO-LDAS and satellite-aided retrieval models

    Science.gov (United States)

    Truckenbrodt, Sina C.; Schmullius, Christiane C.

    2018-03-01

    Ground reference data are a prerequisite for the calibration, update, and validation of retrieval models facilitating the monitoring of land parameters based on Earth Observation data. Here, we describe the acquisition of a comprehensive ground reference database which was created to test and validate the recently developed Earth Observation Land Data Assimilation System (EO-LDAS) and products derived from remote sensing observations in the visible and infrared range. In situ data were collected for seven crop types (winter barley, winter wheat, spring wheat, durum, winter rape, potato, and sugar beet) cultivated on the agricultural Gebesee test site, central Germany, in 2013 and 2014. The database contains information on hyperspectral surface reflectance factors, the evolution of biophysical and biochemical plant parameters, phenology, surface conditions, atmospheric states, and a set of ground control points. Ground reference data were gathered at an approximately weekly resolution and on different spatial scales to investigate variations within and between acreages. In situ data collected less than 1 day apart from satellite acquisitions (RapidEye, SPOT 5, Landsat-7 and -8) with a cloud coverage ≤ 25 % are available for 10 and 15 days in 2013 and 2014, respectively. The measurements show that the investigated growing seasons were characterized by distinct meteorological conditions causing interannual variations in the parameter evolution. Here, the experimental design of the field campaigns, and methods employed in the determination of all parameters, are described in detail. Insights into the database are provided and potential fields of application are discussed. The data will contribute to a further development of crop monitoring methods based on remote sensing techniques. The database is freely available at PANGAEA (https://doi.org/10.1594/PANGAEA.874251).

  8. A Novel Control Algorithm Expressions Set for not Negligible Resistive Parameters PM Brushless AC Motors

    Directory of Open Access Journals (Sweden)

    Renato RIZZO

    2012-08-01

    Full Text Available This paper deals with permanent magnet brushless motors. In particular, a new set of control algorithm expressions is proposed that takes into account the resistive parameters of the motor, unlike simplified models of this type of motor in which these parameters are usually neglected. The control is set up, and an analysis of its performance is reported in the paper, where the new expressions are validated with reference to a motor prototype that is particularly compact because it is foreseen for application in tram propulsion drives. The results are presented in the last part of the paper.

  9. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  10. Waveform inversion in acoustic orthorhombic media with a practical set of parameters

    KAUST Repository

    Masmoudi, Nabil

    2017-08-17

    Full-waveform inversion (FWI) in anisotropic media is challenging overall, mainly because of the large computational cost, especially in 3D, and the potential trade-offs between the model parameters needed to describe such media. We propose an efficient 3D FWI implementation for orthorhombic anisotropy under the acoustic assumption. Our modeling is based on solving the pseudo-differential orthorhombic wave equation split into a differential operator and a scalar one. The modeling is computationally efficient and free of shear-wave artifacts. Using the adjoint-state method, we derive the gradients with respect to a practical set of parameters describing the acoustic orthorhombic model, made of one velocity and five dimensionless parameters. This parameterization allows us to use a multi-stage model inversion strategy based on the continuity of the scattering potential of the parameters as we go from higher-symmetry anisotropies to lower ones. We apply the proposed approach to a modified SEG-EAGE overthrust synthetic model. The quality of the inverted model suggests that we may recover only four parameters, with different resolution scales depending on the scattering potential of these parameters.

  11. Exploring the interdependencies between parameters in a material model.

    Energy Technology Data Exchange (ETDEWEB)

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  12. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  13. Clustering reveals limits of parameter identifiability in multi-parameter models of biochemical dynamics.

    Science.gov (United States)

    Nienałtowski, Karol; Włodarczyk, Michał; Lipniacki, Tomasz; Komorowski, Michał

    2015-09-29

    Compared to engineering or physics problems, dynamical models in quantitative biology typically depend on a relatively large number of parameters. Progress in developing mathematics to manipulate such multi-parameter models, and so enable their efficient interplay with experiments, has been slow. Existing solutions are significantly limited by model size. In order to simplify the analysis of multi-parameter models, a method for clustering of model parameters is proposed. It is based on a derived, statistically meaningful measure of similarity between groups of parameters. The measure quantifies to what extent changes in values of some parameters can be compensated by changes in values of other parameters. The proposed methodology provides a natural mathematical language to precisely communicate and visualise effects resulting from compensatory changes in values of parameters. As a result, relevant insight into identifiability analysis and experimental planning can be obtained. Analysis of NF-κB and MAPK pathway models shows that highly compensative parameters constitute clusters consistent with the network topology. The method, applied to examine an exceptionally rich set of published experiments on the NF-κB dynamics, reveals that the experiments jointly ensure identifiability of only 60% of model parameters. The method indicates which further experiments should be performed in order to increase the number of identifiable parameters. We currently lack methods that simplify broadly understood analysis of multi-parameter models. The introduced tools depict mutually compensative effects between parameters to provide insight regarding the role of individual parameters, identifiability and experimental design. The method can also find applications in related methodological areas of model simplification and parameter estimation.
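
The compensation idea described above can be approximated, under strong simplifying assumptions, by comparing parameter sensitivity directions: if two parameters have collinear sensitivity vectors, a change in one can be compensated by the other. The toy model, distance measure and threshold below are illustrative stand-ins, not the authors' statistical similarity measure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy model y(t; k1, k2, k3): k1 and k2 enter only through their product,
# so a change in one can be fully compensated by the other.
t = np.linspace(0.1, 5, 40)
def model(k):
    return k[0] * k[1] * np.exp(-k[2] * t)

k0 = np.array([1.0, 2.0, 0.5])
eps = 1e-6
# Finite-difference sensitivity vector for each parameter (Jacobian columns).
S = np.column_stack([(model(k0 + eps * np.eye(3)[i]) - model(k0)) / eps
                     for i in range(3)])
S /= np.linalg.norm(S, axis=0)

# Compensation-style distance: collinear sensitivities give distance ~ 0.
D = np.clip(1.0 - np.abs(S.T @ S), 0.0, None)
condensed = D[np.triu_indices(3, 1)]
clusters = fcluster(linkage(condensed), t=0.1, criterion="distance")
```

Here k1 and k2 land in the same cluster (their product is identifiable, not the individual values), while the decay rate k3 separates out.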

  14. Emergence and spread of antibiotic resistance: setting a parameter space.

    Science.gov (United States)

    Martínez, José Luis; Baquero, Fernando

    2014-05-01

    The emergence and spread of antibiotic resistance among human pathogens is a relevant problem for human health and one of the few evolution processes amenable to experimental studies. In the present review, we discuss some basic aspects of antibiotic resistance, including mechanisms of resistance, origin of resistance genes, and bottlenecks that modulate the acquisition and spread of antibiotic resistance among human pathogens. In addition, we analyse several parameters that modulate the evolution landscape of antibiotic resistance. Learning why some resistance mechanisms emerge but do not evolve after a first burst, whereas others can spread over the entire world very rapidly, mimicking a chain reaction, is important for predicting the evolution, and relevance for human health, of a given mechanism of resistance. Because of this, we propose that the emergence and spread of antibiotic resistance can only be understood in a multi-parameter space. Measuring the effect on antibiotic resistance of parameters such as contact rates, transfer rates, integration rates, replication rates, diversification rates, and selection rates, for different genes and organisms, growing under different conditions in distinct ecosystems, will allow for a better prediction of antibiotic resistance and possibilities of focused interventions.

  15. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which there are very little or no monitoring data available, raising the question of whether it is possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using global averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
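
The Nash-Sutcliffe efficiency used above to score the regionalized parameter sets is straightforward to compute; a minimal sketch with hypothetical streamflow values (not data from the study):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean."""
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

# Hypothetical daily streamflow values.
obs = np.array([1.2, 3.4, 8.9, 5.1, 2.2, 1.5])
sim = np.array([1.0, 3.9, 8.1, 5.6, 2.6, 1.1])
ns = nash_sutcliffe(obs, sim)
```

Values in the 0.5–0.9 range, as reported above, indicate that the simulation explains most of the observed variance.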

  16. Model parameter updating using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Treml, C. A. (Christine A.); Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  17. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form

  18. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by Lamé constants and density, or by impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the gradient direction was computed accurately using the back-propagation technique and scaled using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with the OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of

  19. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
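
The brute-force estimation step can be sketched as a grid search over the two blur parameters. The edge model below (a separate Gaussian blur parameter for the dark and light sides of the edge, expressed via error functions) is an assumed form for illustration, not necessarily the paper's exact profile:

```python
import numpy as np
from math import erf, sqrt

# Assumed two-blur edge model: the dark and light sides of the edge profile
# get their own blur parameter (illustrative, not the paper's exact form).
def edge_profile(x, s_dark, s_light, lo=10.0, hi=200.0):
    s = np.where(x < 0, s_dark, s_light)
    t = np.array([erf(v) for v in x / (sqrt(2.0) * s)])
    return lo + (hi - lo) * 0.5 * (1.0 + t)

x = np.linspace(-5, 5, 101)
rng = np.random.default_rng(1)
observed = edge_profile(x, 0.8, 1.6) + rng.normal(0.0, 1.0, x.size)

# Brute-force grid search for the parameter pair with global minimum error.
grid = np.linspace(0.2, 3.0, 57)
best_err, best_sd, best_sl = min(
    (np.sum((edge_profile(x, sd, sl) - observed) ** 2), sd, sl)
    for sd in grid for sl in grid
)
```

Exhaustive search is slow but, as the paper notes, it guarantees that the global minimum error over the searched grid is found.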

  20. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that ...

  1. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
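
The computational burden the authors address, repeatedly solving the PDE under candidate parameter values, can be illustrated with a naive grid-search estimator for the diffusivity of a 1D heat equation (all values and the solver setup are illustrative):

```python
import numpy as np

# Explicit finite-difference solver for u_t = D * u_xx on [0, 1], u = 0 at ends.
def solve_heat(D, u0, dx, dt, steps):
    u = u0.copy()
    r = D * dt / dx ** 2           # stability requires r <= 0.5
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0.0, 1.0, 51)
dx, dt, steps = x[1] - x[0], 1e-4, 500
u0 = np.sin(np.pi * x)

# Synthetic noisy measurements generated with an assumed true diffusivity.
true_D = 0.7
data = solve_heat(true_D, u0, dx, dt, steps)
data = data + np.random.default_rng(0).normal(0.0, 1e-3, data.size)

# Naive approach the text alludes to: re-solve the PDE for every candidate D.
candidates = np.linspace(0.1, 1.5, 141)
sse = [np.sum((solve_heat(D, u0, dx, dt, steps) - data) ** 2) for D in candidates]
D_hat = candidates[int(np.argmin(sse))]
```

The parameter cascading and Bayesian methods proposed in the article replace this repeated numerical solving with a basis-function representation of the solution, which is where the computational savings come from.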

  2. Fault Detection of Wind Turbines with Uncertain Parameters: A Set-Membership Approach

    Directory of Open Access Journals (Sweden)

    Thomas Bak

    2012-07-01

    Full Text Available In this paper a set-membership approach for fault detection of a benchmark wind turbine is proposed. The benchmark represents relevant fault scenarios in the control system, including sensor, actuator and system faults. In addition we also consider parameter uncertainties and uncertainties on the torque coefficient. High noise on the wind speed measurement, nonlinearities in the aerodynamic torque and uncertainties on the parameters make fault detection a challenging problem. We use an effective wind speed estimator to reduce the noise on the wind speed measurements. A set-membership approach is used to generate a set that contains all states consistent with the past measurements and the given model of the wind turbine, including uncertainties and noise. This set represents all possible states the system can be in if not faulty. If the current measurement is not consistent with this set, a fault is detected. For representation of these sets we use zonotopes, and for modeling of uncertainties we use matrix zonotopes, which yields a computationally efficient algorithm. The method is applied to the wind turbine benchmark problem with and without uncertainties. The results demonstrate the effectiveness of the proposed method compared to other methods applied to the same problem. An advantage of the proposed method is that there is no need for threshold design, and it does not produce false alarms. In the case where uncertainty on the torque lookup table is introduced, some faults are not detectable. Previous research has not addressed this uncertainty. The method proposed here requires equal or less detection time than previous results.
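
The consistency test at the heart of the method can be sketched with intervals, the simplest set representation (the paper itself uses zonotopes for computational efficiency in higher dimensions). The scalar system, bounds and measurement sequence below are invented for illustration:

```python
# Interval-based set-membership consistency check. Assumed model:
# x+ = a*x + u + w with x >= 0, uncertain a in [0.85, 0.95],
# bounded disturbance |w| <= 0.05, and measurement noise |v| <= 0.1.
def step_set(x_lo, x_hi, u, a=(0.85, 0.95), w=0.05):
    # One-step reachable interval of the uncertain system (valid for x >= 0).
    return a[0] * x_lo + u - w, a[1] * x_hi + u + w

def consistent(y, x_lo, x_hi, v=0.1):
    # Fault-free iff the measurement interval intersects the predicted set.
    return y + v >= x_lo and y - v <= x_hi

x_lo = x_hi = 1.0                          # known initial state
measurements = [1.85, 2.55, 3.10, 9.0]     # last sample mimics a sensor fault
faulty = []
for y in measurements:
    x_lo, x_hi = step_set(x_lo, x_hi, u=1.0)
    ok = consistent(y, x_lo, x_hi)
    faulty.append(not ok)
    if ok:                                  # refine the set with the measurement
        x_lo, x_hi = max(x_lo, y - 0.1), min(x_hi, y + 0.1)
```

Because a fault is declared only when the measurement falls entirely outside the guaranteed set, no threshold tuning is needed and, by construction, no false alarms occur as long as the uncertainty bounds hold.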

  3. Risk considerations for a long-term open-state of the radioactive waste storage facility Schacht Asse II. Variation of the parameter sets for radio-ecological modeling using the Monte Carlo method

    International Nuclear Information System (INIS)

    Kueppers, Christian; Ustohalova, Veronika

    2013-01-01

    The risk considerations for a long-term open state of the radioactive waste storage facility Schacht Asse II include the following issues: description of radio-ecological models for the radionuclide transport in the covering rock formations and determination of the radiation exposure, parameters of the radio-ecological models and their variability, and application of the Monte Carlo method. The results of the modeling calculations cover short-lived radionuclides, long-lived radionuclides, radionuclides within decay chains, and sensitivity analyses with respect to the correlation of input data and results.

  4. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady-state response when using lumped-parameter models is given. (au)

  5. HEMODOSE: A Set of Multi-parameter Biodosimetry Tools

    Science.gov (United States)

    Hu, Shaowen; Blakely, William F.; Cucinotta, Francis A.

    2014-01-01

    After the events of September 11, 2001 and the recent events at the Fukushima reactors in Japan, there is increasing concern that nuclear and radiological terrorism or accidents may result in large numbers of casualties in densely populated areas. To guide medical personnel in their clinical decisions for effective medical management and treatment of exposed individuals, biological markers are usually applied to examine radiation-induced changes at different biological levels. Among these, peripheral blood cell counts are widely used to assess the extent of radiation-induced injury, because the hematopoietic system is the most vulnerable part of the human body to radiation damage. In particular, the lymphocyte, granulocyte, and platelet cells are the most radiosensitive of the blood elements, and monitoring their changes after exposure is regarded as the most practical and best laboratory test to estimate radiation dose. The HEMODOSE web tools are built upon solid physiological and pathophysiological understanding of mammalian hematopoietic systems, and rigorous coarse-grained biomathematical modeling and validation. Using single or serial granulocyte, lymphocyte, leukocyte, or platelet counts after exposure, these tools can estimate absorbed doses of adult victims rapidly and accurately. Patient data from historical accidents are used as examples to demonstrate the capabilities of these tools as a rapid point-of-care diagnostic or centralized high-throughput assay system in a large-scale radiological disaster scenario. Unlike previous dose prediction algorithms, the HEMODOSE web tools establish robust correlations between absorbed doses and the victim's various blood cell counts, not only in the early time window (1 or 2 days) but also in the very late phase (up to 4 weeks) after exposure.

  6. CHAMP: Changepoint Detection Using Approximate Model Parameters

    Science.gov (United States)

    2014-06-01

    form (with independent emissions or otherwise), in which parameter estimates are available via means such as maximum likelihood fit, MCMC, or sample ... counterparts, including the ability to generate a full posterior distribution over changepoint locations and offering a natural way to incorporate prior ... sample consensus method. Our modifications also remove a significant restriction on model definition when detecting parameter changes within a single

  7. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs

  8. Allometric or lean body mass scaling of propofol pharmacokinetics: towards simplifying parameter sets for target-controlled infusions.

    Science.gov (United States)

    Coetzee, Johan Francois

    2012-03-01

    Uncertainty exists as to the most suitable pharmacokinetic parameter sets for propofol target-controlled infusions (TCI). The pharmacokinetic parameter sets currently employed are clearly not universally applicable, particularly when patient attributes differ from those of the subjects who participated in the original research from which the models were derived. Increasing evidence indicates that the pharmacokinetic parameters of propofol can be scaled allometrically as well as in direct proportion to lean body mass (LBM). Appraisal of hitherto published studies suggests that an allometrically scaled pharmacokinetic parameter set may be applicable to a wide range of patients ranging from children to obese adults. On the other hand, there is evidence that propofol pharmacokinetic parameters, scaled linearly to LBM, provide improved dosing in normal and obese adults. The 'Schnider' pharmacokinetic parameter set that has been programmed into commercially available TCI pumps cannot be employed at present for morbidly obese patients (body mass index >40 kg/m2), because of anomalous behaviour of the equation used to calculate LBM, resulting in administration of excessive amounts of propofol. Simulations of TCI using improved equations to calculate LBM indicate that the Schnider model delivers similar amounts of propofol to morbidly obese patients as do the allometrically scaled pharmacokinetic parameter sets. These hypotheses deserve further investigation. To facilitate further investigation, researchers are encouraged to make their data freely available to the WorldSIVA Open TCI Initiative (http://opentci.org).
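
Allometric scaling as discussed above typically raises clearance-type parameters to a 3/4 power of the weight ratio, while volumes scale linearly. A generic sketch, with an assumed reference clearance value that is not taken from any published propofol model:

```python
# Generic allometric scaling of a clearance-type pharmacokinetic parameter.
# All numbers are illustrative, not coefficients of an actual propofol model.
def allometric(value_ref, weight_kg, weight_ref=70.0, exponent=0.75):
    # "Three-quarter power" scaling is commonly used for clearances;
    # volumes of distribution typically scale with exponent 1.0.
    return value_ref * (weight_kg / weight_ref) ** exponent

cl_ref = 1.6                          # hypothetical clearance (L/min) at 70 kg
cl_140 = allometric(cl_ref, 140.0)    # doubling weight scales CL by 2**0.75
```

Note that doubling body weight increases a 3/4-power-scaled clearance by only about 68%, which is the practical difference from scaling linearly with total body weight.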

  9. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). ...

  10. Photosynthesis-irradiance parameters of marine phytoplankton: synthesis of a global data set

    Science.gov (United States)

    Bouman, Heather A.; Platt, Trevor; Doblin, Martina; Figueiras, Francisco G.; Gudmundsson, Kristinn; Gudfinnsson, Hafsteinn G.; Huang, Bangqin; Hickman, Anna; Hiscock, Michael; Jackson, Thomas; Lutz, Vivian A.; Mélin, Frédéric; Rey, Francisco; Pepin, Pierre; Segura, Valeria; Tilstone, Gavin H.; van Dongen-Vogels, Virginie; Sathyendranath, Shubha

    2018-02-01

    The photosynthetic performance of marine phytoplankton varies in response to a variety of factors, environmental and taxonomic. One of the aims of the MArine primary Production: model Parameters from Space (MAPPS) project of the European Space Agency is to assemble a global database of photosynthesis-irradiance (P-E) parameters from a range of oceanographic regimes as an aid to examining the basin-scale variability in the photophysiological response of marine phytoplankton, and to use this information to improve the assignment of P-E parameters in the estimation of global marine primary production using satellite data. The MAPPS P-E database, which consists of over 5000 P-E experiments, provides information on the spatio-temporal variability in the two P-E parameters (the assimilation number, PmB, and the initial slope, αB, where the superscript B indicates normalisation to the concentration of chlorophyll) that are fundamental inputs for models (satellite-based and otherwise) of marine primary production that use chlorophyll as the state variable. Quality-control measures consisted of removing samples with abnormally high parameter values, and flags were added to denote whether the spectral quality of the incubator lamp was used to calculate a broad-band value of αB. The MAPPS database provides a photophysiological data set that is unprecedented in number of observations and in spatial coverage. The database will be useful to a variety of research communities, including marine ecologists, biogeochemical modellers, remote-sensing scientists and algal physiologists. The compiled data are available at https://doi.org/10.1594/PANGAEA.874087 (Bouman et al., 2017).
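
The two P-E parameters catalogued in the database define the photosynthesis-irradiance curve. A common saturating form (assumed here for illustration) uses the assimilation number PmB as the asymptote and the initial slope αB:

```python
import numpy as np

# Common saturating P-E form (assumed for illustration):
# PB(E) = PmB * (1 - exp(-alphaB * E / PmB)), so the slope at E = 0 is alphaB
# and the curve saturates at the assimilation number PmB.
def pe_curve(E, PmB, alphaB):
    return PmB * (1.0 - np.exp(-alphaB * E / PmB))

E = np.linspace(0.0, 1000.0, 5)   # irradiance grid (illustrative values)
PmB, alphaB = 4.0, 0.05           # illustrative parameter values
P = pe_curve(E, PmB, alphaB)
```

Given a satellite chlorophyll field and assigned (PmB, αB) values, a primary-production model integrates such a curve over depth and daylight.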

  11. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. The models incorporated in aero-elastic programs are of a semi-empirical nature, so the resulting aerodynamic forces depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  12. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of the tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical-analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density functions and are included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has the potential to let us maximally utilize the available radiobiology knowledge for better IMRT treatment
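
The EUD model mentioned above reduces a dose distribution to a single value via a generalized mean controlled by the tissue-specific parameter a; a minimal sketch with hypothetical dose values:

```python
import numpy as np

# Generalized-mean form of the equivalent uniform dose (EUD):
# EUD = (sum_i v_i * D_i**a) ** (1/a), with fractional volumes v_i.
# Large positive a -> max-dose dominated (serial organs); a = 1 -> mean dose.
def eud(doses, volumes, a):
    d = np.asarray(doses, dtype=float)
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                       # normalise to fractional volumes
    return float(np.sum(v * d ** a) ** (1.0 / a))

doses = [60.0, 62.0, 58.0, 61.0]          # Gy, hypothetical sub-volume doses
vols = [1.0, 1.0, 1.0, 1.0]
mean_dose = eud(doses, vols, a=1.0)       # a = 1 reduces to the mean dose
```

Treating a as a probability distribution rather than a point value, as the paper proposes, turns each EUD evaluation into an expectation over this formula.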

  13. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  14. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  15. Models and parameters for environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base

  16. Improved image quality in computerised tomography with proper X-ray energy parameter settings

    International Nuclear Information System (INIS)

    Hammersberg, P.

    1995-01-01

    Image quality in Computerised Tomography (CT) depends strongly on the quality of the CT projection data. CT projection data, in turn, depend on sample composition and dimension, contrasting details within the sample, and the equipment used, i.e. X-ray spectra, filtration, detector response, equipment geometry and CT parameters (such as number of projections, number of pixels, reconstruction filter, etc.). This work focuses on the problem of selecting the optimal physical parameters in order to maximize the signal-to-noise ratio in CT projection data (SNR_CT) between a contrasting detail and the surrounding material, for CT scanners equipped with poly-energetic X-ray sources (conventional X-ray tubes) and scintillator-screen-based detector systems (image intensifier and optical video chain). The first paper (I) presents the derivation and verification of a poly-energetic theoretical model for SNR_CT. This model was used to find the tube potential setting yielding maximum SNR_CT. It was shown that simplified calculations, which are valid for mono-energetic X-ray sources and/or photon-counting detectors, do not correctly predict the optimal tube potential. The study also includes measurements of the actual X-ray source energy spectrum and photon-transport Monte Carlo simulations of the response of the detector system. In the second paper (II) the model for SNR_CT has been used with robust design engineering to find a setting of several control factors which maximizes the SNR_CT and is robust to variation of the type of contrasting detail. How the optimal settings of the control factors were affected by the exposure limits (i.e. defocusing) of the micro-focal X-ray source was also investigated. The imaging control factors of interest were: tube potential, filter thickness, optical aperture and an aluminium X-ray attenuation equalization filter design. 16 refs

  17. Design Galleries: A general approach to setting parameters for computer graphics and animation

    OpenAIRE

    Gibson, Sarah; Beardsley, Paul; Ruml, Wheeler; Kang, Thomas; Mirtich, Brian; Seims, Joshua; Freeman, William; Hodgins, Jessica; Pfister, Hanspeter; Marks, Joe; Andalman, Brad; Shieber, Stuart

    1997-01-01

    Image rendering maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, finding input parameters that yield desirable output values is often a painful process of manual tweaking. Interactive evolution and inverse design are two general methodologies for computer-assisted parameter setting in which the computer plays a prominent role. In this paper we...

  18. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When site-specific data are minimal, the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
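The model-averaging step can be illustrated with Akaike-style weights. The `exp(-ΔIC/2)` weighting and the cutoff used to drop negligible models are common conventions assumed here, not necessarily the report's exact posterior-probability formula:

```python
import math

def model_weights(ic_values):
    # Akaike-style weights: w_k proportional to exp(-delta_IC_k / 2),
    # a common way to turn information-criterion scores into
    # (approximate) posterior model probabilities.
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(predictions, weights, cutoff=0.01):
    # Drop models with negligible weight, renormalize, and average
    # the remaining models' predictions.
    kept = [(p, w) for p, w in zip(predictions, weights) if w >= cutoff]
    z = sum(w for _, w in kept)
    return sum(p * w / z for p, w in kept)
```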

  19. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    In the development of thermodynamic databases for multicomponent systems using the cluster expansion–cluster variation methods, we need to have a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of ...

  20. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculation of the widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of more reliable and new experimental data are adopted, which include the average level spacing D, the radiative capture width Γγ0 at the neutron binding energy, and the cumulative level number N0 at low excitation energy, published between 1973 and 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements.
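For reference, a textbook form of the Fermi-gas level density can be sketched as below. This is the simplest back-shifted form with spin-cutoff and parity factors omitted; it is not the adjusted Gilbert-Cameron parameterization derived in the work:

```python
import math

def fermi_gas_level_density(E, a, delta=0.0):
    # Back-shifted Fermi-gas level density (simplest textbook form):
    #   rho(U) = sqrt(pi)/12 * exp(2*sqrt(a*U)) / (a**0.25 * U**1.25)
    # with effective excitation energy U = E - delta (energies in MeV,
    # level density parameter a in 1/MeV). Spin/parity factors omitted.
    U = E - delta
    if U <= 0.0:
        return 0.0
    return (math.sqrt(math.pi) / 12.0
            * math.exp(2.0 * math.sqrt(a * U))
            / (a**0.25 * U**1.25))
```

Fitting quantities such as the average level spacing D at the neutron binding energy amounts to adjusting a (and the back-shift delta) until densities like this reproduce the observed values.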

  1. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration of its application on a social media maturity data set. Specifically, we employ Necessary Condition Analysis (NCA) to identify maturity stage boundaries as necessary conditions and Qualitative Comparative Analysis (QCA) to arrive at multiple configurations that can be equally effective in progressing to higher maturity.

  2. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

The selection and parameterization of reliable process descriptions in ecological modelling is subject to several uncertainties. The procedure depends strongly on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of these tools are closed source; due to this, the choice of a specific parameter estimation method is sometimes driven more by availability than by performance. A toolbox with a large set of methods can support users in deciding on the most suitable method, and it makes it possible to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analysis (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for

  3. Advances in Modelling, System Identification and Parameter ...

    Indian Academy of Sciences (India)

models determined from flight test data by using parameter estimation methods find extensive use in the design/modification of flight control systems, high-fidelity flight simulators and the evaluation of handling qualities of aircraft and rotorcraft. R K Mehra et al present new algorithms and results for flutter tests and adaptive notching ...

  4. A lumped parameter model of plasma focus

    International Nuclear Information System (INIS)

    Gonzalez, Jose H.; Florido, Pablo C.; Bruzzone, H.; Clausse, Alejandro

    1999-01-01

A lumped parameter model to estimate the neutron emission of a plasma focus (PF) device is developed. The dynamics of the current sheet are calculated using a snowplow model, and the neutron production with the thermal fusion cross section for a deuterium filling gas. The results were contrasted, as a function of the filling pressure, with experimental measurements of a 3.68 kJ Mather-type PF. (author)
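A heavily simplified version of the snowplow dynamics can be sketched as follows. The constant driving pressure and all numbers are hypothetical toy values; the actual model couples the sheet motion to the capacitor-bank circuit equations, which are omitted here:

```python
def snowplow_rundown(rho=1e-4, p_mag=1e5, dt=1e-9, steps=2000):
    # One-dimensional snowplow: a current sheet driven by a constant
    # magnetic pressure p_mag [Pa] sweeps up gas of density rho [kg/m^3].
    # Momentum balance per unit area: d(m*v)/dt = p_mag, with swept-up
    # mass m = rho * x. Explicit Euler integration (toy values only).
    x, mom, v = 1e-6, 0.0, 0.0   # sheet position [m], momentum/area, speed
    for _ in range(steps):
        mom += p_mag * dt        # accumulate momentum from the drive
        v = mom / (rho * x)      # current sheet speed
        x += v * dt
    return x, v
```

Under a constant drive the analytic solution settles at the speed sqrt(p_mag/rho), which the Euler trajectory approaches after an initial transient.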

  5. One parameter model potential for noble metals

    International Nuclear Information System (INIS)

    Idrees, M.; Khwaja, F.A.; Razmi, M.S.K.

    1981-08-01

    A phenomenological one parameter model potential which includes s-d hybridization and core-core exchange contributions is proposed for noble metals. A number of interesting properties like liquid metal resistivities, band gaps, thermoelectric powers and ion-ion interaction potentials are calculated for Cu, Ag and Au. The results obtained are in better agreement with experiment than the ones predicted by the other model potentials in the literature. (author)

  6. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from a workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterization of the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
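The plain Monte Carlo sampler, the simplest of the algorithm families bundled in SPOTPY, can be illustrated in a few lines. This is a stand-alone re-implementation on the Rosenbrock test function, not SPOTPY's actual API; the bounds and sample count are arbitrary:

```python
import random

def rosenbrock(x, y):
    # Rosenbrock test function; global minimum 0 at (1, 1).
    return (1 - x)**2 + 100 * (y - x**2)**2

def monte_carlo_search(n=20000, seed=42):
    # Plain Monte Carlo parameter search: draw each parameter from a
    # uniform prior and keep the best objective value seen so far.
    rng = random.Random(seed)
    best, best_obj = None, float("inf")
    for _ in range(n):
        x = rng.uniform(-2, 2)     # uniform prior on each parameter
        y = rng.uniform(-2, 2)
        obj = rosenbrock(x, y)
        if obj < best_obj:
            best, best_obj = (x, y), obj
    return best, best_obj
```

Swapping the sampling loop for LHS, SCE-UA or MCMC while keeping the objective function fixed is exactly the kind of comparison a toolbox like SPOTPY makes cheap.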

  7. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  8. Nonparametric Comparison of Two Dynamic Parameter Setting Methods in a Meta-Heuristic Approach

    Directory of Open Access Journals (Sweden)

    Seyhun HEPDOGAN

    2007-10-01

Meta-heuristics are commonly used to solve combinatorial problems in practice. Many approaches provide very good quality solutions in a short amount of computational time; however, most meta-heuristics use parameters to tune performance for particular problems, and selecting these parameters before solving the problem can require much time. This paper investigates the problem of setting parameters in a typical meta-heuristic called Meta-RaPS (Meta-heuristic for Randomized Priority Search). Meta-RaPS is a promising meta-heuristic optimization method that has been applied to different types of combinatorial optimization problems and has achieved very good performance compared to other meta-heuristic techniques. To solve a combinatorial problem, Meta-RaPS uses two well-defined stages at each iteration: construction and local search. After a number of iterations, the best solution is reported. Meta-RaPS performance depends on the fine tuning of two main parameters, the priority percentage and the restriction percentage, which are used during the construction stage. This paper presents two different dynamic parameter setting methods for Meta-RaPS, which tune the parameters while a solution is being found. To compare the two approaches, nonparametric statistical tests are utilized, since the solutions are not normally distributed. Results from both dynamic parameter setting methods are reported.
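The construction stage and its two parameters can be sketched for a 0/1 knapsack problem. The value-density priority rule and the instance data are illustrative assumptions, not taken from the paper:

```python
import random

def metaraps_construct(values, weights, capacity,
                       priority=0.6, restriction=0.2, seed=1):
    # One Meta-RaPS-style construction pass (illustrative sketch):
    # with probability `priority` take the highest-priority feasible item;
    # otherwise pick at random among items whose priority is within
    # `restriction` of the best. Priority rule here: value density.
    rng = random.Random(seed)
    remaining = list(range(len(values)))
    load, solution = 0, []
    while True:
        feasible = [i for i in remaining if load + weights[i] <= capacity]
        if not feasible:
            break
        prio = {i: values[i] / weights[i] for i in feasible}
        best = max(prio.values())
        if rng.random() < priority:
            pick = max(feasible, key=lambda i: prio[i])
        else:
            candidates = [i for i in feasible
                          if prio[i] >= (1 - restriction) * best]
            pick = rng.choice(candidates)
        solution.append(pick)
        load += weights[pick]
        remaining.remove(pick)
    return solution, sum(values[i] for i in solution)
```

A dynamic parameter setting method would adjust `priority` and `restriction` between iterations based on the quality of the solutions produced so far.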

9. Comparative Analysis of Transformation Parameters between ITRF Frames Computed by Least Squares with the Helmert 14-Parameter Model versus the IERS Standard Parameters

    OpenAIRE

    Fadly, Romi; Dewi, Citra

    2014-01-01

This research aims to compare the 14 transformation parameters between ITRF frames obtained from a least-squares computation using the Helmert 14-parameter model with the IERS standard parameters. The transformation parameters are calculated from the coordinates and velocities of ITRF05 to ITRF00 at epoch 2000.00, and from ITRF08 to ITRF05 at epoch 2005.00, for the respective transformations. The transformation parameters are compared to the IERS standard parameters, and then tested for the significance of the d...

  10. Determining Relative Importance and Effective Settings for Genetic Algorithm Control Parameters.

    Science.gov (United States)

    Mills, K L; Filliben, J J; Haines, A L

    2015-01-01

    Setting the control parameters of a genetic algorithm to obtain good results is a long-standing problem. We define an experiment design and analysis method to determine relative importance and effective settings for control parameters of any evolutionary algorithm, and we apply this method to a classic binary-encoded genetic algorithm (GA). Subsequently, as reported elsewhere, we applied the GA, with the control parameter settings determined here, to steer a population of cloud-computing simulators toward behaviors that reveal degraded performance and system collapse. GA-steered simulators could serve as a design tool, empowering system engineers to identify and mitigate low-probability, costly failure scenarios. In the existing GA literature, we uncovered conflicting opinions and evidence regarding key GA control parameters and effective settings to adopt. Consequently, we designed and executed an experiment to determine relative importance and effective settings for seven GA control parameters, when applied across a set of numerical optimization problems drawn from the literature. This paper describes our experiment design, analysis, and results. We found that crossover most significantly influenced GA success, followed by mutation rate and population size and then by rerandomization point and elite selection. Selection method and the precision used within the chromosome to represent numerical values had least influence. Our findings are robust over 60 numerical optimization problems.
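A minimal GA exposing these control parameters might look as follows, here run on the OneMax toy problem. The test problems in the paper are different numerical optimization problems, and the parameter values shown are arbitrary defaults:

```python
import random

def run_ga(bits=32, pop_size=40, crossover_p=0.9, mutation_p=0.02,
           elite=1, generations=60, seed=7):
    # Classic binary-encoded GA on OneMax (maximize the number of 1-bits),
    # exposing the control parameters studied in the paper: population
    # size, crossover rate, mutation rate and elite count.
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)                    # OneMax fitness
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:elite]]         # elite selection
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = p1[:]
            if rng.random() < crossover_p:             # one-point crossover
                cut = rng.randrange(1, bits)
                child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < mutation_p) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)
```

Running such a GA over a factorial design of (pop_size, crossover_p, mutation_p, ...) values is the experimental setup the paper's analysis is built on.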

  11. Setting priorities in health care organizations: criteria, processes, and parameters of success

    Directory of Open Access Journals (Sweden)

    Martin Douglas K

    2004-09-01

Background: Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. Discussion: We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Summary: Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.

  12. Setting priorities in health care organizations: criteria, processes, and parameters of success.

    Science.gov (United States)

    Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A

    2004-09-08

    Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.

  13. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  14. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady-state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  15. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

The importance of tourism and its related sectors to economic development and poverty reduction in many countries has increased researchers' attention to studying and modeling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique by modeling the arrival of Korean tourists in Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA forecasts used for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
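The core of the TVP technique, a Kalman filter for regression coefficients that follow a random walk, can be sketched as below. The noise variances q and r are assumed tuning values, and the MAPE definition is the standard formula:

```python
def tvp_filter(ys, xs, q=1e-4, r=1.0):
    # Time-varying-parameter regression y_t = x_t' * beta_t + e_t with
    # random-walk coefficients beta_t, estimated by a scalar-observation
    # Kalman filter (illustrative sketch; q, r are assumed variances).
    k = len(xs[0])
    beta = [0.0] * k                                        # state mean
    P = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    fitted = []
    for y, x in zip(ys, xs):
        # predict: random walk adds q to each coefficient's variance
        P = [[P[i][j] + (q if i == j else 0.0) for j in range(k)] for i in range(k)]
        yhat = sum(b * xi for b, xi in zip(beta, x))
        Px = [sum(P[i][j] * x[j] for j in range(k)) for i in range(k)]
        s = sum(x[i] * Px[i] for i in range(k)) + r         # innovation variance
        gain = [pxi / s for pxi in Px]                      # Kalman gain
        beta = [b + g * (y - yhat) for b, g in zip(beta, gain)]
        P = [[P[i][j] - gain[i] * Px[j] for j in range(k)] for i in range(k)]
        fitted.append(yhat)
    return beta, fitted

def mape(actual, forecast):
    # Mean absolute percentage error, the score used for the TVP model.
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```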

  16. Lumped Parameters Model of a Crescent Pump

    Directory of Open Access Journals (Sweden)

    Massimo Rundo

    2016-10-01

This paper presents the lumped parameters model of an internal gear crescent pump with a relief valve, able to estimate the steady-state flow-pressure characteristic and the pressure ripple. The approach is based on the identification of three variable control volumes regardless of the number of gear teeth. The model has been implemented in the commercial environment LMS Amesim with the development of customized components. Specific attention has been paid to the leakage passageways, some of which are affected by the deformation of the cover plate under the action of the delivery pressure. The paper reports the finite element method analysis of the cover for the evaluation of the deflection, and its validation with a contactless displacement transducer. Another aspect described in this study is the computational fluid dynamics analysis of the relief valve, whose results have been used for tuning the lumped parameters model. Finally, the validation of the entire model of the pump is presented in terms of steady-state flow rate and pressure oscillations.

  17. Quantitative evaluation of ozone and selected climate parameters in a set of EMAC simulations

    Directory of Open Access Journals (Sweden)

    M. Righi

    2015-03-01

Four simulations with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model have been evaluated with the Earth System Model Validation Tool (ESMValTool) to identify differences in simulated ozone and selected climate parameters that resulted from (i) different setups of the EMAC model (nudged vs. free-running) and (ii) different boundary conditions (emissions, sea surface temperatures (SSTs) and sea ice concentrations (SICs)). To assess the relative performance of the simulations, quantitative performance metrics are calculated consistently for the climate parameters and ozone. This is important for the interpretation of the evaluation results, since biases in climate can impact biases in chemistry and vice versa. The observational data sets used for the evaluation include ozonesonde and aircraft data, meteorological reanalyses and satellite measurements. The results from a previous EMAC evaluation of a model simulation with nudging towards realistic meteorology in the troposphere have been compared to new simulations with different model setups and updated emission data sets in free-running time-slice and nudged quasi chemistry-transport model (QCTM) mode. The latter two configurations are particularly important for chemistry-climate projections and for the quantification of individual sources (e.g., the transport sector) that lead to small chemical perturbations of the climate system, respectively. With the exception of some specific features which are detailed in this study, no large differences that could be related to the different setups (nudged vs. free-running) of the EMAC simulations were found, which offers the possibility to evaluate and improve the overall model with the help of shorter nudged simulations. The main difference between the two setups is a better representation of the tropospheric and stratospheric temperature in the nudged simulations, which also better reproduce stratospheric water vapor concentrations, due to the improved
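The kind of quantitative performance metrics computed here can be illustrated with a few standard statistics (bias, RMSE, correlation) between a simulated and an observed field; this is a generic sketch, not ESMValTool's exact grading formula:

```python
import math

def performance_metrics(model, obs):
    # Bias, RMSE and Pearson correlation between a simulated and an
    # observed field, both flattened to 1-D sequences of equal length.
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o)**2 for m, o in zip(model, obs)) / n)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    corr = cov / math.sqrt(sum((m - mm)**2 for m in model)
                           * sum((o - mo)**2 for o in obs))
    return bias, rmse, corr
```

Computing such metrics consistently for both ozone and the climate parameters is what makes the four simulations directly comparable.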

  18. Importance of hydrological parameters in contaminant transport modeling in a terrestrial environment

    International Nuclear Information System (INIS)

    Tsuduki, Katsunori; Matsunaga, Takeshi

    2007-01-01

A grid-type, multi-layered, distributed-parameter model for calculating discharge in a watershed is described. Model verification against our field observations resulted in different sets of hydrological parameter values, all of which reproduced the observed discharge. The effect of those varied hydrological parameters on contaminant transport calculations was examined and discussed through simulation of event water transfer. (author)

  19. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, have in the last decade become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
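The bootstrap (SIR) particle filter underlying this class of solutions can be sketched for a 1-D random-walk state observed in Gaussian noise. All model settings below are illustrative assumptions, not code from the book:

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=1.0, r=1.0, seed=3):
    # Minimal bootstrap (SIR) particle filter: propagate particles through
    # the state model, weight them by the measurement likelihood, report
    # the posterior mean, then resample. q and r are the process and
    # measurement noise variances (assumed values).
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        particles = [p + rng.gauss(0.0, math.sqrt(q)) for p in particles]  # propagate
        weights = [math.exp(-0.5 * (y - p)**2 / r) for p in particles]     # likelihood
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))   # posterior mean
        particles = rng.choices(particles, weights=weights, k=n_particles) # resample
    return estimates
```

Random-set extensions replace the single-state particles with finite sets of states, which is what lets the same machinery track appearing and disappearing objects.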

  20. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on a MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to the multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for the future experiments. The Monte Carlo method was used to simulate neutron spectra, energy deposition and dose calculations. Some of the calculation results are presented in the paper.

  1. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  2. Climate change decision-making: Model & parameter uncertainties explored

    Energy Technology Data Exchange (ETDEWEB)

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives, and helps set priorities for research so that the outcome ambiguities faced by decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representations of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is largest, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the choice of model structure rather than by parameter uncertainties.
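The style of Monte Carlo uncertainty propagation used in such models can be sketched with a deliberately toy damage relation. Both distributions and units below are hypothetical, chosen only to illustrate the approach, and are not ICAM's actual equations:

```python
import random
import statistics

def propagate_uncertainty(n=5000, seed=11):
    # Monte Carlo propagation of parameter uncertainty through a toy
    # damage relation: damage = climate_sensitivity * damage_coeff.
    # Both input distributions are hypothetical placeholders.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        climate_sensitivity = rng.lognormvariate(1.0, 0.4)  # toy: deg C per doubling
        damage_coeff = rng.uniform(0.5, 1.5)                # toy: % GDP per deg C
        samples.append(climate_sensitivity * damage_coeff)
    q = statistics.quantiles(samples, n=20)                 # 5%, 10%, ..., 95%
    return statistics.mean(samples), q[0], q[-1]            # mean, 5th, 95th percentile
```

Ranking which input's uncertainty drives the output spread (as the paper does for climatic, damage, economic and aerosol uncertainties) then amounts to repeating such runs with one input fixed at a time.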

  3. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

Parameter values developed in this report, and the related FEPs, are listed in Table 1-1. The relationship between the parameters and FEPs was based on a comparison of the parameter definition and the FEP descriptions as presented in BSC (2003 [160699], Section 6.2). The parameter values developed in this report support the biosphere model and are reflected in the TSPA through the biosphere dose conversion factors (BDCFs). Biosphere modeling focuses on radionuclides screened for the TSPA-LA (BSC 2002 [160059]). The same list of radionuclides is used in this analysis (Section 6.1.4). The analysis considers two human exposure scenarios (groundwater and volcanic ash) and climate change (Section 6.1.5). This analysis combines and revises two previous reports, "Transfer Coefficient Analysis" (CRWMS M&O 2000 [152435]) and "Environmental Transport Parameter Analysis" (CRWMS M&O 2001 [152434]), because the new ERMYN biosphere model requires a redefined set of input parameters. The scope of this analysis includes providing a technical basis for the selection of radionuclide- and element-specific biosphere parameters (except for Kd) that are important for calculating BDCFs based on the available radionuclide inventory abstraction data. The environmental transport parameter values were developed specifically for use in the biosphere model and may not be appropriate for other applications

  4. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    ], Section 6.2). Parameter values developed in this report, and the related FEPs, are listed in Table 1-1. The relationship between the parameters and FEPs was based on a comparison of the parameter definition and the FEP descriptions as presented in BSC (2003 [160699], Section 6.2). The parameter values developed in this report support the biosphere model and are reflected in the TSPA through the biosphere dose conversion factors (BDCFs). Biosphere modeling focuses on radionuclides screened for the TSPA-LA (BSC 2002 [160059]). The same list of radionuclides is used in this analysis (Section 6.1.4). The analysis considers two human exposure scenarios (groundwater and volcanic ash) and climate change (Section 6.1.5). This analysis combines and revises two previous reports, ''Transfer Coefficient Analysis'' (CRWMS M&O 2000 [152435]) and ''Environmental Transport Parameter Analysis'' (CRWMS M&O 2001 [152434]), because the new ERMYN biosphere model requires a redefined set of input parameters. The scope of this analysis includes providing a technical basis for the selection of radionuclide- and element-specific biosphere parameters (except for Kd) that are important for calculating BDCFs based on the available radionuclide inventory abstraction data. The environmental transport parameter values were developed specifically for use in the biosphere model and may not be appropriate for other applications.

  5. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions, the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
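
The elasticity notion used above can be illustrated numerically. The sketch below computes the elasticity of a toy quadratic objective with respect to a dose-threshold parameter by central finite differences; the objective function and values are invented stand-ins, not the paper's EUD-based functions.

```python
def objective(threshold):
    # Hypothetical smooth planning objective (a stand-in, not the paper's EUD).
    return (threshold - 60.0) ** 2 + 10.0

def elasticity(f, p, h=1e-6):
    # Relative change of f per relative change of p: f'(p) * p / f(p),
    # with the derivative approximated by a central finite difference.
    deriv = (f(p + h) - f(p - h)) / (2.0 * h)
    return deriv * p / f(p)

e = elasticity(objective, 65.0)
print("elasticity at threshold 65:", round(e, 3))
```

A large elasticity flags a parameter whose small relative changes move the objective strongly, which is exactly the screening use described in the abstract.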

  6. Moose models with vanishing S parameter

    International Nuclear Information System (INIS)

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2) L and U(1) Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. The model then acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of S through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.

  7. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...

  8. Interactive 3-D Immersive Visualization for Analysis of Large Multi-Parameter Atmospheric Data Sets

    Science.gov (United States)

    Frenzer, J. B.; Hoell, J. M.; Holdzkom, J. J.; Jacob, D.; Fuelberg, H.; Avery, M.; Carmichael, G.; Hopkins, D. L.

    2001-12-01

    Significant improvements in the ability of atmospheric chemistry models to predict the transport and production of atmospheric constituents on regional and global scales have been realized over the past decade. Concurrent with the model improvements has been an increase in the size and complexity of atmospheric observational data sets. As a result, the challenge to provide efficient and realistic visualization of atmospheric data "products" has increased dramatically. Over the past several years, personnel from the Atmospheric Sciences Data Center (ASDC) at NASA's Langley Research Center have explored the merits of visualizing atmospheric data products using interactive, immersive visualization hardware and software. As part of this activity, the Virtual Global Explorer and Observatory (vGeo) software, developed by VRCO, Inc., has been utilized to support the visual analysis of large multivariate data sets. The vGeo software provides an environment in which the user can create, view, navigate, and interact with data, models, and images in an immersive 3-D environment. The vGeo visualization capability was employed during the March/April 2001, NASA Global Tropospheric Experiment Transport and Chemical Evolution over the Pacific (TRACE-P) mission [(GTE) http://www-gte.larc.nasa.gov] to support day-to-day flight-planning activities through the creation of virtual 3-D worlds containing modeled data and proposed aircraft flight paths. The GTE, a major activity within NASA's Earth Science Enterprise, is primarily an aircraft-based measurement program, supplemented by ground-based measurements and satellite observations, focused on understanding the impact of human activity on the global troposphere. TRACE-P is the most recent campaign conducted by GTE and was deployed to Hong Kong and then to the Yokota Airbase, Japan. TRACE-P is the third in a series of GTE field campaigns in the northwestern Pacific region to understand the chemical composition of air masses

  9. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    Science.gov (United States)

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  10. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  11. Modeling extreme events: Sample fraction adaptive choice in parameter estimation

    Science.gov (United States)

    Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata

    2012-09-01

    When modeling extreme events there are a few primordial parameters, among which we refer to the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution, and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for a careful choice of k. Choosing some well-known estimators of those parameters, we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
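
The trade-off in k described above can be seen with the classical Hill estimator of the extreme value index, one of the standard semi-parametric estimators in this setting. A minimal sketch, assuming simulated Pareto data with true index 0.5 (for exactly Pareto tails the bias term vanishes, so only the variance effect is visible here):

```python
import random, math

random.seed(7)

# Heavy-tailed sample: standard Pareto with tail index alpha = 2,
# so the true extreme value index is 1/alpha = 0.5.
alpha = 2.0
x = sorted((random.random() ** (-1.0 / alpha) for _ in range(2000)),
           reverse=True)

def hill(sample_desc, k):
    # Hill estimator from the k largest order statistics (descending input).
    return sum(math.log(sample_desc[i] / sample_desc[k])
               for i in range(k)) / k

# High variance for small k, and (for non-Pareto tails) growing bias for
# large k, is what motivates an adaptive choice of k.
for k in (20, 100, 500):
    print(k, round(hill(x, k), 3))
```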

  12. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability to cover the observed future interest rates when compared with those based on the model with fixed parameters.
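
For context, the CKLS short-rate dynamics dr = (a + b·r) dt + σ·r^γ dW can be simulated with a simple Euler-Maruyama scheme. The sketch below uses illustrative parameter values, not the stochastic estimates of the paper:

```python
import random

random.seed(0)

# Euler-Maruyama simulation of the CKLS SDE:
#   dr = (a + b*r) dt + sigma * r**gamma dW
# Parameter values are illustrative only, not estimated from data.
a, b, sigma, gamma = 0.08, -0.9, 0.2, 0.75
dt, r = 1.0 / 252, 0.05                    # daily steps, initial rate 5%
path = [r]
for _ in range(252):                       # simulate one year
    dW = random.gauss(0.0, dt ** 0.5)
    drift = (a + b * r) * dt
    diffusion = sigma * max(r, 0.0) ** gamma * dW
    r = max(r + drift + diffusion, 0.0)    # keep the rate non-negative
    path.append(r)
print("rate after one year:", round(path[-1], 4))
```

With b < 0 the drift pulls the rate toward the long-run level -a/b; the paper's contribution is to let (a, b, σ, γ) themselves evolve stochastically from window to window.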

  13. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as with most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also provides a qualitative ranking of their importance in contributing to the model output.
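
The supporting observation, that a GA tends to stabilize influential parameters first, can be sketched with a minimal elitist GA on a toy two-parameter objective in which the first parameter dominates the fitness. Everything here (objective, population size, mutation scale) is an invented stand-in for the lumped reactor model of the paper:

```python
import random, statistics

random.seed(3)

# Toy objective: parameter p0 influences fitness strongly, p1 only weakly
# (a stand-in for "important" vs. "unimportant" model parameters).
def fitness(ind):
    return (ind[0] - 2.0) ** 2 + 0.01 * (ind[1] - 5.0) ** 2

pop = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(40)]
spreads = []
for gen in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                       # archive of best solutions
    # Track stabilization: spread of each parameter within the archive.
    spreads.append(tuple(statistics.pstdev(ind[i] for ind in elite)
                         for i in range(2)))
    # Children are mutated copies of archive members.
    pop = [[p + random.gauss(0, 0.3) for p in random.choice(elite)]
           for _ in range(40)]

pop.sort(key=fitness)
print("best solution:", [round(v, 2) for v in pop[0]])
print("final archive spreads (p0, p1):",
      tuple(round(s, 2) for s in spreads[-1]))
```

Watching `spreads` over the generations shows the archive statistics narrowing for the dominant parameter well before the weak one, which is the qualitative importance signal the abstract describes.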

  14. COMPARING INTRA- AND INTERENVIRONMENTAL PARAMETERS OF OPTIMAL SETTING IN BREEDING EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Domagoj Šimić

    2004-06-01

    A series of biometrical and quantitative-genetic parameters, not well known in Croatia, are used for the most important agronomic traits to determine optimal genotype setting within a location as well as among locations. The objectives of the study are to estimate and to compare (1) parameters of intra-environment setting: the effective mean square error (EMSE) in lattice design, the relative efficiency (RE) of the lattice design (LD) compared to the randomized complete block design (RCBD), and the repeatability (Rep) of a plot value; and (2) operative heritability (h2) as a parameter of inter-environment setting, in an experiment with 72 maize hybrids. Trials were set up in four environments (two locations in two years) evaluating grain yield and stalk rot. EMSE values corresponded across environments for both traits, while the estimates of RE of the LD varied inconsistently over environments and traits. Rep estimates differed more over environments than over traits. Rep values did not correspond with h2 estimates: Rep estimates for stalk rot were higher than those for grain yield, while h2 for grain yield was higher than for stalk rot in all instances. Our results suggest that, due to the importance of genotype × environment interaction, multi-environment trials are needed for both traits. If the experimental framework must be reduced for economic or other reasons, decreasing the number of locations in a year rather than the number of years of investigation is recommended.

  15. Influence of Weaving Loom Setting Parameters on Changes of Woven Fabric Structure and Mechanical Properties

    Directory of Open Access Journals (Sweden)

    Aušra ADOMAITIENĖ

    2011-11-01

    During the manufacturing of fabrics of different raw materials it was noticed that, after removing the fabric from the weaving loom and after stabilization of the fabric structure, the changes of the parameters of the fabric structure are not regular. This investigation analysed how the weaving loom technological parameters (heald cross moment and initial warp tension) should be chosen and how to predict the changes of the fabric structure parameters and its mechanical properties. The dependencies of the changes of half-wool fabric structure parameters (weft setting, fabric thickness and projections of the fabric cross-section) and mechanical properties (breaking force, elongation at break, static friction force and static friction coefficient) on the weaving loom setting parameters (heald cross moment and initial warp tension) were analysed. The orthogonal Box plan of two factors was used, 3-D dependencies were drawn, and empirical equations of these dependencies were established. http://dx.doi.org/10.5755/j01.ms.17.4.780

  16. Testing for parameter instability across different modeling frameworks

    NARCIS (Netherlands)

    Calvori, Francesco; Creal, Drew; Koopman, Siem Jan; Lucas, André

    2017-01-01

    We develop a new parameter instability test that generalizes the seminal ARCH Lagrange Multiplier test of Engle (1982) for a constant variance against the alternative of autoregressive conditional heteroskedasticity to settings with nonlinear time-varying parameters and non-Gaussian distributions.

  17. DETERMINATION OF RELATIONAL CLASSIFICATION AMONG HULL FORM PARAMETERS AND SHIP MOTIONS PERFORMANCE FOR A SET OF SMALL VESSELS

    Directory of Open Access Journals (Sweden)

    Ayla Sayli

    2016-08-01

    Data science for engineers is a recent research area which suggests analysing large data sets in order to find data analytics and use them for better design and modelling. Ship design practice reveals that conceptual ship design is critically important for a successful basic design. Conceptual ship design needs to identify the true set of design variables influencing vessel performance and costs, so as to define the best possible basic design by use of a performance prediction model constructed by design engineers. The main idea of this paper follows from this: to determine a relational classification of a set of small vessels using their hull form parameters and performance characteristics, defined by the transfer functions of heave and pitch motions and of absolute vertical acceleration, by means of our in-house software application based on the K-Means algorithm from data mining. The application is implemented in the C# programming language on a Microsoft SQL Server database. We also use the Elbow method to estimate the true number of clusters for the K-Means algorithm. The computational results show that the considered set of small vessels can be clustered into three categories according to the functional relations of their hull form parameters and transfer functions, considering all cases of three loading conditions, seven ship speeds as non-dimensional Froude numbers (Fn) and nine wave-length to ship-length values (λ/L).
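
A minimal pure-Python sketch of the K-Means-plus-Elbow procedure described above, on synthetic 2-D data standing in for hull-form parameters and transfer-function values (the paper's implementation is in C# on SQL Server; the data and cluster count here are invented):

```python
import random

random.seed(4)

# Three well-separated synthetic groups in a 2-D feature space.
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)] +
        [(random.gauss(5, 0.3), random.gauss(0, 0.3)) for _ in range(30)] +
        [(random.gauss(0, 0.3), random.gauss(5, 0.3)) for _ in range(30)])

def dist2(p, c):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

def kmeans(pts, k, iters=30):
    centers = random.sample(pts, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [(sum(q[0] for q in g) / len(g), sum(q[1] for q in g) / len(g))
                   if g else centers[j] for j, g in enumerate(groups)]
    # Inertia: total within-cluster squared distance.
    return sum(min(dist2(p, c) for c in centers) for p in pts)

def best_inertia(pts, k, restarts=5):
    # Several random restarts guard against poor local optima.
    return min(kmeans(pts, k) for _ in range(restarts))

# Elbow method: inertia falls sharply up to the true cluster count (3 here),
# then flattens.
inertias = {k: best_inertia(data, k) for k in range(1, 6)}
print({k: round(v, 1) for k, v in inertias.items()})
```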

  18. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

    Dengue is a global health problem and of concern to travelers and deploying military personnel; development and licensure of an effective tetravalent dengue vaccine are a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum, with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Dimensionality reduction of RKHS model parameters.

    Science.gov (United States)

    Taouali, Okba; Elaissi, Ilyes; Messaoud, Hassani

    2015-07-01

    This paper proposes a new method to reduce the number of parameters of models developed in the Reproducing Kernel Hilbert Space (RKHS). In fact, this number is equal to the number of observations used in the learning phase, which is assumed to be high. The proposed method, entitled Reduced Kernel Partial Least Squares (RKPLS), consists of approximating the retained latent components determined using the Kernel Partial Least Squares (KPLS) method by their closest observation vectors. The paper presents the design and a comparative study of the proposed RKPLS method and the Support Vector Machines for Regression (SVR) technique. The proposed method is applied to identify a nonlinear Process Trainer PT326, a physical process available in our laboratory. Moreover, as a thermal process with a large time response, it allows effective observations contributing to model identification to be recorded easily. Compared to the SVR technique, the results from the proposed RKPLS method are satisfactory. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
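
A sketch of the reduction step only, under an assumed reading of RKPLS: each retained latent component (which in the real method would come from KPLS) is replaced by its closest observation vector, so the reduced model depends on a few observations rather than all of them. All vectors below are invented:

```python
# Invented 2-D observation vectors and latent components for illustration;
# a real KPLS would produce the latent components from kernel data.
observations = [[0.1, 0.9], [0.8, 0.2], [0.45, 0.55], [0.3, 0.6]]
latent_components = [[0.5, 0.5], [0.15, 0.8]]

def nearest(vec, pool):
    # Closest vector in the pool by squared Euclidean distance.
    return min(pool, key=lambda o: sum((a - b) ** 2 for a, b in zip(vec, o)))

reduced = [nearest(t, observations) for t in latent_components]
print(reduced)   # → [[0.45, 0.55], [0.1, 0.9]]
```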

  20. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth's upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of the ionospheric parameters ever more crucial. In the last decades, space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, current space geodetic techniques such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others have found applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure for computation of a global model of electron density. To model the ionosphere in 3D, the electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions of degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure; moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/Cosmic data.

  1. Investigation of land use effects on Nash model parameters

    Science.gov (United States)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    Flood forecasting is of great importance in hydrologic planning, hydraulic structure design, water resources management and sustainable designs such as flood control and management. Nash's instantaneous unit hydrograph is frequently used for simulating the hydrological response of natural watersheds. Urban hydrology is gaining more attention due to population increases and the associated escalation of construction. Rapid development of urban areas affects the hydrologic processes of watersheds by decreasing soil permeability, flood base flow and lag time, and increasing flood volume, peak runoff rates and flood frequency. In this study the influence of urbanization on the significant parameters of the Nash model has been investigated. These parameters were calculated using three popular methods (i.e. moments, root mean square error and random sampling data generation) in a small watershed consisting of one natural sub-watershed which drains into a residentially developed sub-watershed in the city of Sierra Vista, Arizona. The results indicated that, for all three methods, the lag time, which is the product of the Nash parameters "K" and "n", is greater in the natural sub-watershed than in the developed one. This logically implies more storage and/or attenuation in the natural sub-watershed. The median K and n parameters derived from the three methods using calibration events were tested on a set of verification events. The results indicated that all three methods have acceptable accuracy in hydrograph simulation. The CDF curves and histograms of the parameters clearly show the difference in the Nash parameter values between the natural and developed sub-watersheds. Some specific upper and lower percentile values around the median of the generated parameters (i.e. 10, 20 and 30%) were analyzed to further investigate the derived parameters. The model was sensitive to variations in the values of the uncertain K and n parameters. Changes in n are smaller than in K in both sub-watersheds.
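
The Nash instantaneous unit hydrograph referred to above is a gamma-shaped response, u(t) = (1/(K·Γ(n)))·(t/K)^(n−1)·e^(−t/K), with lag time given by the product n·K. A sketch with illustrative parameter values (not the study's estimates):

```python
import math

def nash_iuh(t, n, K):
    # Nash IUH: u(t) = (t/K)**(n-1) * exp(-t/K) / (K * Gamma(n))
    return (t / K) ** (n - 1) * math.exp(-t / K) / (K * math.gamma(n))

# Lag time is n*K; larger values mean more storage and attenuation,
# as reported for the natural sub-watershed.
n_nat, K_nat = 3.0, 2.0    # illustrative "natural" sub-watershed values
n_dev, K_dev = 2.0, 1.0    # illustrative "developed" sub-watershed values
print("lag natural:", n_nat * K_nat, " lag developed:", n_dev * K_dev)

# The IUH carries unit runoff volume: it integrates to ~1 over time
# (midpoint-rule check).
dt = 0.01
area = sum(nash_iuh((i + 0.5) * dt, n_nat, K_nat)
           for i in range(int(60 / dt))) * dt
print("area under IUH:", round(area, 3))
```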

  2. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  3. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  4. Model parameter learning using Kullback-Leibler divergence

    Science.gov (United States)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to a dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
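
A toy version of the procedure, assuming a 1-D periodic Ising model small enough to enumerate exactly: the coupling J is recovered by gradient descent on KL(p_data ‖ p_J), whose gradient for this exponential family is the difference between the model and data expectations of the sufficient statistic:

```python
import math, itertools

N, J_true = 8, 0.5   # ring of 8 spins with a known "true" coupling

configs = list(itertools.product((-1, 1), repeat=N))
# Sufficient statistic: sum of nearest-neighbour products around the ring.
stats = [sum(s[i] * s[(i + 1) % N] for i in range(N)) for s in configs]

def boltzmann(J):
    # Exact Boltzmann distribution p_J(s) ∝ exp(J * S(s)).
    w = [math.exp(J * S) for S in stats]
    Z = sum(w)
    return [v / Z for v in w]

def mean_stat(p):
    return sum(pi * S for pi, S in zip(p, stats))

S_data = mean_stat(boltzmann(J_true))   # "data" from the true coupling

# dKL/dJ = <S>_model - <S>_data; plain gradient descent on J.
J = 0.0
for _ in range(300):
    J -= 0.05 * (mean_stat(boltzmann(J)) - S_data)
print("fitted J:", round(J, 3))
```

Because KL divergence is convex in J for this family, the descent recovers the generating coupling; the paper's setting replaces the exact enumeration with sampled configurations.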

  5. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    Science.gov (United States)

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

Train wheel sets must be periodically inspected for possible or actual premature failures, and recording the wear history over a wheel set's full service life is highly valuable. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on the opto-electronic measuring technique is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates to space coordinates. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
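A 'mapping function method' of the kind described above can be realized by fitting a low-order polynomial from pixel to space coordinates against calibration points of known geometry. A minimal sketch with numpy, using an assumed quadratic mapping and synthetic calibration data rather than the authors' actual calibration:

```python
import numpy as np

# hypothetical ground-truth camera mapping: affine plus mild quadratic distortion
def true_map(u, v):
    x = 0.33 * u + 0.02 * v + 1e-4 * u * v + 5.0
    y = -0.01 * u + 0.34 * v + 8e-5 * v**2 - 2.0
    return x, y

# calibration grid of pixel coordinates (e.g. from a target of known geometry)
u, v = np.meshgrid(np.linspace(0, 640, 9), np.linspace(0, 480, 7))
u, v = u.ravel(), v.ravel()
x, y = true_map(u, v)

# quadratic "mapping function": basis [1, u, v, u^2, uv, v^2] -> (x, y)
A = np.column_stack([np.ones_like(u), u, v, u**2, u * v, v**2])
cx, *_ = np.linalg.lstsq(A, x, rcond=None)
cy, *_ = np.linalg.lstsq(A, y, rcond=None)

def pixel_to_space(up, vp):
    # map a new pixel coordinate to space coordinates with the fitted basis
    f = np.array([1.0, up, vp, up**2, up * vp, vp**2])
    return f @ cx, f @ cy

xs, ys = pixel_to_space(100.0, 200.0)
print(round(float(xs), 3), round(float(ys), 3))  # ~ (44.0, 68.2), matching true_map
```

Because the assumed distortion lies in the span of the quadratic basis, the least-squares fit reproduces it almost exactly; a real calibration would trade basis order against noise in the detected laser profile.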

  6. Modelled basic parameters for semi-industrial irradiation plant design

    International Nuclear Information System (INIS)

    Mangussi, J.

    2009-01-01

The basic parameters of an irradiation plant design are the total activity, the product uniformity ratio and the process efficiency. The target density, the minimum required dose and the throughput depend on the intended use of the irradiator. In this work, a model for calculating the specific dose rate at several depths in an infinite homogeneous medium produced by a slab-source irradiator is presented. The product minimum dose rate for a set of target thicknesses is obtained. The design method steps are detailed and an illustrative example is presented. (author)

  7. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF conditions to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to the existing upstream wave theory: 1. pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; 2. weakening of the bow shock, which implies less efficient reflection; 3. the SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream with the Alfvén speed; and 4. the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through

  8. Mass balance model parameter transferability on a tropical glacier

    Science.gov (United States)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

The mass balance and melt water production of glaciers is of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, short wave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of between 14 and 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly.
The same happens if we optimize the model parameters for each year individually and transfer
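The random-variation strategy described above (many simulations with the sensitive parameters drawn from physically meaningful ranges, keeping the best-scoring set) can be sketched with a toy degree-day ablation model standing in for the full process-based mass-balance model; all parameter values and ranges here are illustrative assumptions:

```python
import random

random.seed(42)

# toy degree-day ablation model (an illustrative stand-in for the
# process-based surface mass balance model used in the study)
def modelled_ablation(temps, ddf, t_crit):
    # ablation in mm w.e.: degree-day factor times positive degrees above t_crit
    return sum(ddf * max(0.0, t - t_crit) for t in temps)

temps = [random.gauss(2.0, 3.0) for _ in range(60)]      # synthetic temperatures
obs = modelled_ablation(temps, ddf=4.2, t_crit=0.5)      # synthetic "stake reading"

# 1000 random draws from assumed physically meaningful parameter ranges
best, best_err = None, float("inf")
for _ in range(1000):
    ddf = random.uniform(1.0, 10.0)      # mm w.e. per K per day
    t_crit = random.uniform(-2.0, 2.0)   # melt threshold temperature, deg C
    err = abs(modelled_ablation(temps, ddf, t_crit) - obs)
    if err < best_err:
        best, best_err = (ddf, t_crit), err

print(best, best_err)  # best parameter set and its misfit against the "stake"
```

Transferring `best` to a second synthetic site (different `temps`, different true parameters) and re-scoring it would reproduce the paper's point that an optimum is site-specific.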

  9. On the control of distributed parameter systems using a multidimensional systems setting

    Czech Academy of Sciences Publication Activity Database

    Cichy, B.; Augusta, Petr; Rogers, E.; Galkowski, K.; Hurák, Z.

    2008-01-01

    Roč. 22, č. 7 (2008), s. 1566-1581 ISSN 0888-3270 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : Distributed parameter systems * Modelling * Control law design Subject RIV: BC - Control Systems Theory Impact factor: 1.984, year: 2008

  10. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    Science.gov (United States)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

The wearing degree of the wheel set tread is one of the main factors influencing the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. Line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit. Image acquisition was fulfilled in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA was proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improved the real-time image processing capacity. When a wheel set is running at limited speed, the system, installed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  11. Effect of Collaborative Recommender System Parameters: Common Set Cardinality and the Similarity Measure

    Directory of Open Access Journals (Sweden)

    Mohammad Yahya H. Al-Shamri

    2016-01-01

Recommender systems are widespread due to their ability to help Web users surf the Internet in a personalized way. For example, the collaborative recommender system is a powerful Web personalization tool for suggesting many useful items to a given user based on opinions collected from his neighbors. Among many factors, the similarity measure is an important one affecting the performance of the collaborative recommender system. However, the similarity measure itself largely depends on the overlap between the user profiles. Most previous systems are tested on a predefined number of common items and neighbors; however, system performance may vary if these parameters are changed. The main aim of this paper is to examine the performance of the collaborative recommender system under many similarity measures, common set cardinalities, rating mean groups, and neighborhood set sizes. For this purpose, we propose a modified version of the mean difference weight similarity measure and a new evaluation metric, called users' coverage, for measuring the recommender system's ability to help users. The experimental results show that the modified mean difference weight similarity measure outperforms other similarity measures and that the performance of the collaborative recommender system varies with its parameters; hence the system parameters must be specified in advance.
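A common way to couple a similarity measure to the common-set cardinality is to compute the correlation over co-rated items only and reject user pairs whose overlap falls below a threshold. A minimal sketch using plain Pearson correlation (the ratings and threshold are invented for illustration; this is not the paper's modified mean difference weight measure):

```python
from math import sqrt

def pearson_sim(ratings_a, ratings_b, min_common=3):
    # the "common set": items rated by both users
    common = sorted(set(ratings_a) & set(ratings_b))
    if len(common) < min_common:
        return None  # overlap below the cardinality threshold: not trustworthy
    xa = [ratings_a[i] for i in common]
    xb = [ratings_b[i] for i in common]
    ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
    num = sum((a - ma) * (b - mb) for a, b in zip(xa, xb))
    den = sqrt(sum((a - ma) ** 2 for a in xa) * sum((b - mb) ** 2 for b in xb))
    return num / den if den else 0.0

u1 = {"i1": 5, "i2": 3, "i3": 4, "i4": 4}
u2 = {"i1": 4, "i2": 2, "i3": 5, "i5": 1}
print(round(pearson_sim(u1, u2), 3))      # 3 common items (i1, i2, i3) -> 0.655
print(pearson_sim(u1, u2, min_common=4))  # None: overlap below threshold
```

Varying `min_common` reproduces the trade-off the paper studies: a higher threshold makes similarities more reliable but leaves more user pairs without any similarity at all, which is what the proposed users' coverage metric captures.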

  12. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
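The decline of RMSD with sample size can be illustrated by simulating Rasch (1PL) responses and re-estimating item difficulties at increasing sample sizes. The sketch below assumes person abilities are known and solves the likelihood equation for each item by bisection, a deliberate simplification of the joint estimation used in real calibration studies; the difficulty values are invented:

```python
import math
import random

random.seed(1)

def sigma(z):
    # Rasch success probability for ability theta and difficulty b: sigma(theta - b)
    return 1.0 / (1.0 + math.exp(-z))

TRUE_B = [-1.5, -0.5, 0.0, 0.5, 1.5]  # assumed true item difficulties

def estimate_b(thetas, xs):
    # MLE of difficulty given known abilities: solve sum sigma(theta - b) = sum x
    target = sum(xs)
    lo, hi = -6.0, 6.0
    for _ in range(60):  # bisection: the expected score is decreasing in b
        mid = (lo + hi) / 2.0
        if sum(sigma(t - mid) for t in thetas) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def rmsd_for_sample(n):
    # simulate n examinees, estimate all items, return RMSD against TRUE_B
    thetas = [random.gauss(0.0, 1.0) for _ in range(n)]
    sq = 0.0
    for b in TRUE_B:
        xs = [1 if random.random() < sigma(t - b) else 0 for t in thetas]
        sq += (estimate_b(thetas, xs) - b) ** 2
    return math.sqrt(sq / len(TRUE_B))

for n in (100, 400, 1600):
    print(n, round(rmsd_for_sample(n), 3))  # RMSD shrinks as sample size grows
```

Roughly, the standard error of each difficulty estimate scales as 1/sqrt(n), so quadrupling the sample should about halve the RMSD, which is the qualitative pattern the study quantifies.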

  13. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of

  14. Benchmark data set for wheat growth models

    DEFF Research Database (Denmark)

    Asseng, S; Ewert, F.; Martre, P

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation...

  15. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    Science.gov (United States)

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a 1D hydraulic model of the river Inn, the hydrological model HQsim (rainfall-runoff-discharge model) and the snow and ice melt model SES, modeling the rainfall runoff from non-glaciated and glaciated tributary catchments respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important, aiming at an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a model well calibrated for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.

  16. Preference Mining Using Neighborhood Rough Set Model on Two Universes.

    Science.gov (United States)

    Zeng, Kai

    2016-01-01

Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.

  17. Identifying a standard set of outcome parameters for the evaluation of orthogeriatric co-management for hip fractures.

    Science.gov (United States)

    Liem, I S; Kammerlander, C; Suhm, N; Blauth, M; Roth, T; Gosch, M; Hoang-Kim, A; Mendelson, D; Zuckerman, J; Leung, F; Burton, J; Moran, C; Parker, M; Giusti, A; Pioli, G; Goldhahn, J; Kates, S L

    2013-11-01

    Osteoporotic fractures are an increasing problem in the world due to the ageing of the population. Different models of orthogeriatric co-management are currently in use worldwide. These models differ for instance by the health-care professional who has the responsibility for care in the acute and early rehabilitation phases. There is no international consensus regarding the best model of care and which outcome parameters should be used to evaluate these models. The goal of this project was to identify which outcome parameters and assessment tools should be used to measure and compare outcome changes that can be made by the implementation of orthogeriatric co-management models and to develop recommendations about how and when these outcome parameters should be measured. It was not the purpose of this study to describe items that might have an impact on the outcome but cannot be influenced such as age, co-morbidities and cognitive impairment at admission. Based on a review of the literature on existing orthogeriatric co-management evaluation studies, 14 outcome parameters were evaluated and discussed in a 2-day meeting with panellists. These panellists were selected based on research and/or clinical expertise in hip fracture management and a common interest in measuring outcome in hip fracture care. We defined 12 objective and subjective outcome parameters and how they should be measured: mortality, length of stay, time to surgery, complications, re-admission rate, mobility, quality of life, pain, activities of daily living, medication use, place of residence and costs. We could not recommend an appropriate tool to measure patients' satisfaction and falls. We defined the time points at which these outcome parameters should be collected to be at admission and discharge, 30 days, 90 days and 1 year after admission. Twelve objective and patient-reported outcome parameters were selected to form a standard set for the measurement of influenceable outcome of patients

  18. Parameters extraction for the one-diode model of a solar cell

    Science.gov (United States)

    Sabadus, Andreea; Mihailetchi, Valentin; Paulescu, Marius

    2017-12-01

This paper is focused on numerical algorithms for solving the one-diode model of a crystalline solar cell. Numerical experiments show that, generally, the algorithms reproduce the I-V characteristics accurately, while the modeled parameters (the diode saturation current, the series resistance and the diode ideality factor) exhibit a large dispersion. The question arising here is: which is the correct set of model parameters? In order to address this issue, the extracted parameters are compared with measured ones for a silicon solar cell produced at ISC Konstanz. An attempt to solve the one-diode model numerically for accurate parameter extraction is discussed.
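The one-diode equation is implicit in the current, so even evaluating an I-V characteristic requires a numerical solve before any parameter extraction can start. A minimal Newton-iteration sketch with assumed cell parameters (illustrative values, not the ISC Konstanz cell measured in the paper):

```python
import math

# assumed (illustrative) cell parameters
IPH = 8.5      # photogenerated current, A
I0 = 1e-10     # diode saturation current, A
N_IDE = 1.0    # diode ideality factor
RS = 0.005     # series resistance, ohm
RSH = 50.0     # shunt resistance, ohm
VT = 0.02585   # thermal voltage at ~300 K, V

def cell_current(v, iters=100):
    # solve I = IPH - I0*(exp((v + I*RS)/(N*VT)) - 1) - (v + I*RS)/RSH by Newton
    i = 0.0
    for _ in range(iters):
        e = math.exp((v + i * RS) / (N_IDE * VT))
        f = IPH - I0 * (e - 1.0) - (v + i * RS) / RSH - i
        df = -I0 * e * RS / (N_IDE * VT) - RS / RSH - 1.0
        i -= f / df
    return i

print(round(cell_current(0.0), 3))   # short-circuit current, close to IPH
print(round(cell_current(0.55), 3))  # reduced current approaching open circuit
```

Parameter extraction then amounts to adjusting (I0, N_IDE, RS, RSH, IPH) until such computed currents match a measured I-V curve, and the paper's observation is that quite different parameter sets can fit the curve almost equally well.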

  19. Flare parameters inferred from a 3D loop model database

    Science.gov (United States)

    Cuambe, Valente A.; Costa, J. E. R.; Simões, P. J. A.

    2018-04-01

We developed a database of pre-calculated flare images and spectra exploring a set of parameters which describe the physical characteristics of coronal loops and the accelerated electron distribution. Due to the large number of parameters involved in describing the geometry and the flaring atmosphere in the model used (Costa et al. 2013), we built a large database of models (˜250 000) to facilitate flare analysis. The geometry and characteristics of non-thermal electrons are defined on a discrete grid with spatial resolution greater than 4 arcsec. The database was constructed based on general properties of known solar flares and convolved with instrumental resolution to replicate the observations from Nobeyama radio polarimeter (NoRP) spectra and Nobeyama radioheliograph (NoRH) brightness maps. Observed spectra and brightness distribution maps are easily compared with the modelled spectra and images in the database, indicating a possible range of solutions. The parameter search efficiency in this finite database is discussed. Eight out of ten parameters analysed for one thousand simulated flare searches were recovered with a relative error of less than 20 per cent on average. In addition, from the analysis of the observed correlation between NoRH flare sizes and intensities at 17 GHz, some statistical properties were derived. From these statistics the energy spectral index was found to be δ ˜ 3, with non-thermal electron densities showing a peak distribution ⪅10^7 cm^-3, and a photospheric magnetic field B ⪆ 2000 G. Some bias towards larger loops, with heights as great as ˜2.6 × 10^9 cm, and towards looptop events was noted. An excellent match of the spectrum and the brightness distribution at 17 and 34 GHz of the 2002 May 31 flare is presented as well.
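Matching an observed spectrum against a pre-calculated database reduces to a nearest-neighbour search over the parameter grid. A toy sketch with numpy, using an invented two-parameter spectral shape in place of the full 3D loop model (frequencies, grid ranges and the "observation" are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

freqs = np.array([3.75, 9.4, 17.0, 35.0])  # GHz, NoRP-like channels

def spectrum(peak_freq, amplitude):
    # toy gyrosynchrotron-like shape: rises below the peak, falls above it
    x = freqs / peak_freq
    return amplitude * x**1.5 / (1.0 + x**4)

# build the pre-calculated database on a discrete parameter grid
peaks = np.linspace(5.0, 30.0, 26)       # peak frequency, GHz
amps = np.linspace(10.0, 1000.0, 100)    # peak amplitude, arbitrary flux units
grid = [(p, a) for p in peaks for a in amps]
db = np.array([spectrum(p, a) for p, a in grid])

# "observed" flare spectrum with noise; search the database for the best match
true_p, true_a = 12.0, 400.0
obs = spectrum(true_p, true_a) * (1 + 0.03 * rng.standard_normal(freqs.size))
best_idx = int(np.argmin(((db - obs) ** 2).sum(axis=1)))
p_hat, a_hat = grid[best_idx]
print(float(p_hat), float(a_hat))  # recovered parameters near (12.0, 400.0)
```

In the real database the grid is ten-dimensional, so the search cost and the grid spacing, rather than the forward model, limit how precisely each parameter can be recovered, which is the efficiency question the paper quantifies.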

  20. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w_0-w_a' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.

  1. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

The temporal variation of model parameters is affected by the catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models with a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant at different months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that the downscaled model has lower NSEs than those from the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.

  2. Study on Parameters Modeling of Wind Turbines Using SCADA Data

    Directory of Open Access Journals (Sweden)

    Yonglong YAN

    2014-08-01

Taking advantage of the massive monitoring data now available from the Supervisory Control and Data Acquisition (SCADA) systems of wind farms, building data models of the state parameters of wind turbines (WTs) is of great significance for anomaly detection, early warning and fault diagnosis. The operational conditions and the relationships between the state parameters of wind turbines are complex, which makes it difficult to establish accurate state-parameter models, so a modeling method for the state parameters of wind turbines that takes parameter selection into account is proposed. Firstly, by analyzing the characteristics of SCADA data, a reasonable range of data and monitoring parameters are chosen. Secondly, a neural network algorithm is adopted, and a selection method for the input parameters of the model is presented. Generator bearing temperature and cooling air temperature are regarded as target parameters, and the two models are built and their input parameters selected, respectively. Finally, the parameter selection method in this paper and the method using genetic algorithm-partial least squares (GA-PLS) are analyzed comparatively, and the results show that the proposed methods are correct and effective. Furthermore, the modeling of the two parameters illustrates that the method in this paper can be applied to other state parameters of wind turbines.

  3. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against observations of soil water and salinity content. The posterior distribution of the GLUE analysis yields behavioural parameter sets and reveals intervals of parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for the deep percolation fluxes of the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter.
The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
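The GLUE procedure used above can be sketched in a few lines: sample parameter sets, score each with a likelihood measure (here Nash-Sutcliffe efficiency), keep the "behavioural" sets above a threshold, and read off uncertainty intervals from them. The reservoir model, threshold and parameter range below are illustrative assumptions, not the soil models of the study:

```python
import random

random.seed(7)

# toy linear-reservoir water balance, dS = P - k*S, standing in for the soil models
def simulate(k, precip, s0=100.0):
    s, out = s0, []
    for p in precip:
        s = s + p - k * s
        out.append(s)
    return out

precip = [random.uniform(0.0, 10.0) for _ in range(50)]
obs = [v + random.gauss(0.0, 2.0) for v in simulate(0.08, precip)]  # truth: k = 0.08
mean_obs = sum(obs) / len(obs)

def nse(sim):
    # Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Monte Carlo sampling; keep "behavioural" sets above the likelihood threshold
behavioural = []
for _ in range(2000):
    k = random.uniform(0.01, 0.3)
    if nse(simulate(k, precip)) > 0.7:
        behavioural.append(k)

behavioural.sort()
lo = behavioural[int(0.05 * len(behavioural))]
hi = behavioural[int(0.95 * len(behavioural))]
print(len(behavioural), round(lo, 3), round(hi, 3))  # 5-95% interval for k
```

A narrow behavioural interval corresponds to a well-identified, low-uncertainty parameter; a wide one is the GLUE signature of the high-uncertainty parameters the study reports.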

  4. Catchment classification and model parameter transfer with a view to regionalisation

    Science.gov (United States)

    Ley, Rita; Hellebrand, Hugo; Casper, Markus C.

    2013-04-01

Physiographic and climatic catchment characteristics are responsible for catchment response behaviour, whereas hydrological model parameters describe catchment properties in such a way as to transform input data (here: precipitation, evaporation) into runoff, hence describing the response behaviour of a catchment. In this respect, model parameters can be seen as catchment descriptors. A third catchment descriptor is runoff behaviour, depicted by indices derived from event runoff coefficients and flow duration curves. In an ongoing research project funded by the Deutsche Forschungsgemeinschaft (DFG), we investigate the interdependencies of these three catchment descriptors for catchment classification with a view to regionalisation. The study area comprises about 80 meso-scale catchments in western Germany. These catchments are classified by Self-Organising Maps (SOM) based on (a) runoff behaviour and (b) physical and climatic properties. The two classifications show an overlap of about 80% for all catchments and indicate a direct connection between the two descriptors for a majority of the catchments. Next, all catchments are calibrated with a simple and parsimonious conceptual model stemming from the Superflex model framework. In this study we test the interdependencies between the classification and the calibrated model parameters by parameter transfer within and between the classes established by SOM. The model simulates total discharge, given observed precipitation and pre-estimated potential evaporation. Simulations with a few catchments show encouraging results: all simulations with the calibrated model show a good fit, indicated by Nash-Sutcliffe coefficients of about 0.8. Most simulations of runoff time series for catchments with parameter sets belonging to their own class display good performance too, while runoff simulated with model parameter sets from other classes displays significantly lower performance. This indicates that there is a

  5. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    Discrepancies between simulated and actual occupant behaviour can offset the actual energy consumption by several orders of magnitude compared to simulation results. Thus, there is a need to set up guidelines to increase the reliability of forecasts of environmental conditions and energy consumpt...

  6. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must ... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration ... was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed...

  7. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)


    SUBGRADE MODELING. Asrat Worku, Department of ... The models give consistently larger stiffness for the Winkler springs as compared to previously proposed similar continuum-based models that ignore the lateral stresses. ... (ν = 0.25 and E = 40 MPa); (b) a medium stiff clay (ν = 0.45 and E = 50 MPa). In contrast to this, ...

  8. Bayesian analysis of inflation: Parameter estimation for single field models

    International Nuclear Information System (INIS)

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-01-01

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ^n with n=2/3, 1, 2, and 4, natural inflation, and "hilltop" inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the "primordial dark ages" between TeV and grand unified theory scale energies.

  9. An Investigation of Invariance Properties of One, Two and Three Parameter Logistic Item Response Theory Models

    Directory of Open Access Journals (Sweden)

    O.A. Awopeju

    2017-12-01

    Full Text Available The study investigated the invariance properties of one, two and three parameter logistic item response theory models. It examined the best fit among the one-parameter logistic (1PL), two-parameter logistic (2PL) and three-parameter logistic (3PL) IRT models for the 2008 SSCE in Mathematics. It also investigated the degree of invariance of the IRT model-based item difficulty parameter estimates in the SSCE in Mathematics across different samples of examinees and examined the degree of invariance of the IRT model-based item discrimination estimates in the SSCE in Mathematics across different samples of examinees. In order to achieve the set objectives, 6000 students (3000 males and 3000 females) were drawn from the population of 35262 who wrote the 2008 Paper 1 Senior Secondary Certificate Examination (SSCE) in Mathematics organized by the National Examination Council (NECO). The item difficulty and item discrimination parameter estimates from CTT and IRT were tested for invariance using BILOG-MG 3, and correlation analysis was carried out using SPSS version 20. The research findings were that the two-parameter model IRT item difficulty and discrimination parameter estimates exhibited the invariance property consistently across different samples, and that the 2-parameter model was suitable for all samples of examinees, unlike the one-parameter and 3-parameter models.
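
    The three logistic models compared in this record differ only in which item parameters are free. A minimal sketch of the response function (the function name and parameter values are illustrative, not taken from the study):

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta: examinee ability; a: discrimination; b: difficulty;
    c: pseudo-guessing. Setting c=0 gives the 2PL model; additionally
    fixing a to a common value gives the 1PL (Rasch-type) model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the 2PL probability is exactly 0.5;
# the 3PL guessing parameter lifts the lower asymptote.
p2 = irt_prob(0.0, a=1.2, b=0.0)
p3 = irt_prob(0.0, a=1.2, b=0.0, c=0.2)
```

    Invariance studies like this one re-estimate (a, b, c) on different examinee samples and correlate the resulting estimates.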

  10. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    2. Optimal Setting Process Models. 2.1 Optimal setting of process mean in the case of a one-sided limit. In a filling operation, the process average net weight must be set. The standards prescribe the minimum weight which is printed on the packet. This set of quality control problems has a one-sided limit (the minimum net weight).

  11. Design Parameters for Evaluating Light Settings and Light Atmosphere in Hospital Wards

    DEFF Research Database (Denmark)

    Stidsen, Lone; Kirkegaard, Poul Henning; Fisker, Anna Marie

    2010-01-01

    When constructing and designing Danish hospitals for the future, patients, staff and guests are in focus. It is found important to have a starting point in healing architecture and create an environment with knowledge of users' sensory and functional needs, and to look at how hospital wards can...... of staff and guests in the future hospital. This paper is based on Böhme's concept of atmosphere, dealing with the effect of light in experiencing atmosphere and the importance of having a holistic approach when designing a pleasurable light atmosphere. It shows important design parameters for a pleasurable...... light atmosphere in hospital wards and specifically presents a proposal for evaluating light atmosphere in the dynamic light settings for hospital wards.

  12. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on Latin hypercube sampling. Ten performance criteria are calculated, including the Nash–Sutcliffe efficiency (NSE), the Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC). With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
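
    The criteria named in this record have simple closed forms. A stdlib-only sketch of NSE and KGE (helper names are illustrative, not the study's code; KGE is computed from its r, alpha, beta components as in the standard definition):

```python
import math

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 minus error variance over observed variance."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling–Gupta efficiency from correlation r, variability ratio alpha, bias ratio beta."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha = ss / so   # variability ratio
    beta = ms / mo    # bias ratio
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]
# A perfect simulation scores 1 under both criteria; a constant positive
# bias leaves r and alpha at 1 and is penalized only through beta.
```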

  13. Analysis of the spatial variation in the parameters of the SWAT model with application in Flanders, Northern Belgium

    Directory of Open Access Journals (Sweden)

    G. Heuvelmans

    2004-01-01

    Full Text Available Operational applications of a hydrological model often require the prediction of stream flow in (future) time periods without stream flow observations or in ungauged catchments. Data for a case-specific optimisation of model parameters are not available for such applications, so parameters have to be derived from other catchments or time periods. It has been demonstrated that for applications of SWAT in Northern Belgium, temporal transfers of the parameters have less influence than spatial transfers on the performance of the model. This study examines the spatial variation in parameter optima in more detail. The aim was to delineate zones wherein model parameters can be transferred without a significant loss of model performance. SWAT was calibrated for 25 catchments that are part of eight larger sub-basins of the Scheldt river basin. Two approaches are discussed for grouping these units in zones with a uniform set of parameters: a single parameter approach considering each parameter separately and a parameter set approach evaluating the parameterisation as a whole. For every catchment, the SWAT model was run with the local parameter optima, with the average parameter values for the entire study region (Flanders), with the zones delineated with the single parameter approach and with the zones obtained by the parameter set approach. Comparison of the model performances of these four parameterisation strategies indicates that both the single parameter and the parameter set zones lead to stream flow predictions that are more accurate than if the entire study region were treated as one single zone. On the other hand, the use of zonal average parameter values results in a considerably worse model fit compared to local parameter optima. Clustering of parameter sets gives a more accurate result than the single parameter approach and is, therefore, the preferred technique for use in the parameterisation of ungauged sub-catchments as part of the

  14. A New Five-Parameter Fréchet Model for Extreme Values

    Directory of Open Access Journals (Sweden)

    Muhammad Ahsan ul Haq

    2017-09-01

    Full Text Available A new five-parameter Fréchet model for extreme values was proposed and studied. Various mathematical properties including moments, quantiles, and the moment generating function were derived. Incomplete moments and probability weighted moments were also obtained. The maximum likelihood method was used to estimate the model parameters. The flexibility of the derived model was assessed using two real data set applications.
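
    The five-parameter generalisation itself is not reproduced in this record; for orientation, the classical Fréchet baseline it extends has CDF F(x) = exp(−((x−m)/s)^(−α)), whose quantile function follows by inversion. A minimal sketch (function names are illustrative):

```python
import math

def frechet_cdf(x, alpha, s=1.0, m=0.0):
    """CDF of the Fréchet distribution: shape alpha, scale s, location m."""
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

def frechet_quantile(p, alpha, s=1.0, m=0.0):
    """Inverse CDF: Q(p) = m + s * (-ln p)^(-1/alpha), for 0 < p < 1."""
    return m + s * (-math.log(p)) ** (-1.0 / alpha)

# Round trip: the quantile of p maps back to p under the CDF.
q = frechet_quantile(0.5, alpha=2.0)
```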

  15. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow us to identify additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable to determine characteristic features for arbitrary sets of models: the selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map on concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.

  16. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  17. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...

  18. An evolutionary computing approach for parameter estimation investigation of a model for cholera.

    Science.gov (United States)

    Akman, Olcay; Schaefer, Elsa

    2015-01-01

    We consider the problem of using time-series data to inform a corresponding deterministic model and introduce the concept of genetic algorithms (GA) as a tool for parameter estimation, providing instructions for an implementation of the method that does not require access to special toolboxes or software. We give as an example a model for cholera, a disease for which there is much mechanistic uncertainty in the literature. We use GA to find parameter sets using available time-series data from the introduction of cholera in Haiti and we discuss the value of comparing multiple parameter sets with similar performances in describing the data.
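
    A toy version of the GA loop this record describes, fitting a single growth-rate parameter to synthetic data. The exponential "model", the population size, and all GA settings are illustrative stand-ins, not the cholera model or tuning from the paper:

```python
import math
import random

random.seed(1)

def model(t, beta):
    # Hypothetical stand-in for a deterministic (ODE) model output.
    return math.exp(beta * t)

def sse(beta, data):
    """Sum of squared residuals between model and time-series data."""
    return sum((model(t, beta) - y) ** 2 for t, y in data)

# Synthetic time series generated with beta = 0.3 (noise-free for clarity).
data = [(t, math.exp(0.3 * t)) for t in range(10)]

pop = [random.uniform(0.0, 1.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda b: sse(b, data))
    parents = pop[:10]                       # selection: keep the fittest
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                # crossover: blend two parents
        child += random.gauss(0.0, 0.02)     # mutation: small Gaussian kick
        children.append(child)
    pop = parents + children                 # elitist replacement

best = min(pop, key=lambda b: sse(b, data))
```

    No gradients or special toolboxes are required, which is the point made in the record; comparing several near-best individuals from the final population mirrors the paper's discussion of multiple similar-performing parameter sets.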

  19. INFLUENCE OF PROCESS PARAMETERS ON DIMENSIONAL ACCURACY OF PARTS MANUFACTURED USING FUSED DEPOSITION MODELLING TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Filip Górski

    2013-09-01

    Full Text Available The paper presents the results of an experimental study, part of research on additive technology using thermoplastics as a build material, namely Fused Deposition Modelling (FDM). The aim of the study was to identify the relation between a basic parameter of the FDM process – model orientation during manufacturing – and the dimensional accuracy and repeatability of the obtained products. A set of samples was prepared – they were manufactured with variable process parameters and measured using a 3D scanner. Significant differences in the accuracy of products of the same geometry, but manufactured with different sets of process parameters, were observed.

  20. Presenting a Model for Setting in Narrative Fiction Illustration

    Directory of Open Access Journals (Sweden)

    Hajar Salimi Namin

    2017-12-01

    Full Text Available The present research aims at presenting a model for evaluating and enhancing training in setting in illustration for narrative fictions, for undergraduate students of graphic design who are weak in setting. The research utilized experts' opinions through a survey. The designed model was submitted to eight experts, and their opinions were used to adjust and improve the model. The research instruments were notes, materials in textbooks, papers, and related websites, as well as questionnaires. Results indicated that, for evaluating and enhancing the level of training in setting in illustration for narrative fiction, one needs to extract the sub-indexes of setting. Moreover, definition and recognition of the model of setting helps undergraduate students of graphic design enhance the setting skill in their works by recognizing the details of setting. Accordingly, it is recommended to design training packages to enhance these sub-indexes and hence improve setting for narrative fiction illustration.

  1. NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems

    Science.gov (United States)

    Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)

    1994-01-01

    Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.

  2. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    Science.gov (United States)

    Paulson, James A.

    This paper reviews the application of the EM Algorithm to marginal maximum likelihood estimation of parameters in the latent class model and extends the algorithm to the case where there are monotone homogeneity constraints on the item parameters. It is shown that the EM algorithm can be used to obtain marginal maximum likelihood estimates of the…

  3. Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram

    2013-01-01

    Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support...... multinomial criteria that involve trade-offs between attributes at the choice set formation stage. This study proposes a novel behavioral paradigm consisting of a hybrid compensatory-noncompensatory choice set formation process, followed by compensatory choice. The behavioral paradigm is represented...

  4. Bankruptcy prediction using SVM models with a new approach to combine features selection and parameter optimisation

    Science.gov (United States)

    Zhou, Ligang; Keung Lai, Kin; Yen, Jerome

    2014-03-01

    Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. Support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and features selection. In this study, a new approach based on direct search and features ranking technology is proposed to optimise features selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to the SVM models with parameter optimisation and features selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.
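
    The "direct search" idea for jointly tuning SVM hyperparameters can be illustrated with a compass (pattern) search over a stand-in error surface. The quadratic surrogate below replaces actual SVM cross-validation, which would be far too heavy for a sketch; in practice `cv_error` would train and validate a model at each (log C, log γ) point:

```python
def cv_error(log_c, log_gamma):
    # Hypothetical smooth stand-in for a cross-validation error surface;
    # minimum placed arbitrarily at (1, -2) for demonstration.
    return (log_c - 1.0) ** 2 + 0.5 * (log_gamma + 2.0) ** 2

def compass_search(f, x, step=1.0, tol=1e-3):
    """Derivative-free direct search: poll the four axis directions,
    move to the first improving point, otherwise halve the step."""
    fx = f(*x)
    while step > tol:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (x[0] + dx, x[1] + dy)
            fc = f(*cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5
    return x, fx

(best_c, best_g), err = compass_search(cv_error, (0.0, 0.0))
```

    Unlike a genetic algorithm, this needs no population, which is one reason direct search is attractive when each objective evaluation is an expensive model training run.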

  5. Standard fire behavior fuel models: a comprehensive set for use with Rothermel's surface fire spread model

    Science.gov (United States)

    Joe H. Scott; Robert E. Burgan

    2005-01-01

    This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.

  6. Diabatic models with transferrable parameters for generalized chemical reactions

    Science.gov (United States)

    Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.

    2017-05-01

    Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

  7. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equation (ODE. Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified. Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates exceeds that of metabolites (chemical species. Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions The proposed incremental estimation method is able to tackle the issue on the lack of complete parameter identifiability and to significantly reduce the computational efforts in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
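
    The incremental idea — estimate time-slopes from the concentration profiles first, then regress the rate law on those slopes instead of repeatedly integrating the ODE — reduces, in its simplest one-parameter form, to two cheap steps. The first-order decay example and variable names below are illustrative, not from the paper:

```python
import math

# Synthetic concentration profile from C' = -k C with k = 0.5.
k_true = 0.5
times = [0.1 * i for i in range(21)]
conc = [math.exp(-k_true * t) for t in times]

# Step 1: central-difference estimates of the time-slopes dC/dt.
slopes = [(conc[i + 1] - conc[i - 1]) / (times[i + 1] - times[i - 1])
          for i in range(1, len(times) - 1)]
c_mid = conc[1:-1]

# Step 2: least-squares fit of slope = -k * C; for this one linear
# parameter the normal equations give k in closed form.
k_hat = -sum(s * c for s, c in zip(slopes, c_mid)) / sum(c * c for c in c_mid)
```

    The computational saving the paper reports comes from the same decoupling: step 2 is algebra on slopes, with no ODE solver inside the optimization loop.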

  8. An approach to adjustment of relativistic mean field model parameters

    Directory of Open Access Journals (Sweden)

    Bayram Tuncay

    2017-01-01

    Full Text Available The Relativistic Mean Field (RMF model with a small number of adjusted parameters is powerful tool for correct predictions of various ground-state nuclear properties of nuclei. Its success for describing nuclear properties of nuclei is directly related with adjustment of its parameters by using experimental data. In the present study, the Artificial Neural Network (ANN method which mimics brain functionality has been employed for improvement of the RMF model parameters. In particular, the understanding capability of the ANN method for relations between the RMF model parameters and their predictions for binding energies (BEs of 58Ni and 208Pb have been found in agreement with the literature values.

  9. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
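
    A stripped-down version of the simulation loop sketched in this record: generate concentrations from a 1-D instantaneous-release diffusion solution, add Gaussian "sensor" noise, and recover the diffusivity by batch least squares (here a simple grid search; all values and names are hypothetical, and the 1-D solution stands in for the paper's two-dimensional shear-diffusion model):

```python
import math
import random

random.seed(0)
D_true, M, t = 2.0, 1.0, 1.0   # diffusivity, released mass, elapsed time

def conc(x, D):
    """1-D instantaneous point release: Gaussian profile spreading as sqrt(D*t)."""
    return M / math.sqrt(4 * math.pi * D * t) * math.exp(-x * x / (4 * D * t))

# Simulated remote-sensed data: model output plus Gaussian noise.
xs = [-4.0 + 0.5 * i for i in range(17)]
obs = [conc(x, D_true) + random.gauss(0.0, 0.001) for x in xs]

def sse(D):
    return sum((conc(x, D) - y) ** 2 for x, y in zip(xs, obs))

# Batch least-squares estimate via grid search over candidate D values.
grid = [0.5 + 0.01 * i for i in range(351)]
D_hat = min(grid, key=sse)
```

    Repeating this with different sensor spacings or noise levels reproduces the record's accuracy study in miniature.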

  10. Lumped parameter models for the interpretation of environmental tracer data

    International Nuclear Information System (INIS)

    Maloszewski, P.; Zuber, A.

    1996-01-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs
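
    All of these lumped-parameter models share the convolution form c_out(t) = ∫ c_in(t−τ) g(τ) dτ, differing only in the transit-time distribution g. For the exponential model (EM), g(τ) = exp(−τ/T)/T with mean transit time T. A numeric sketch (T, the time step, and the step input are arbitrary illustrative choices):

```python
import math

T = 5.0          # mean transit time, e.g. in years (exponential model, EM)
dt = 0.1
taus = [dt * i for i in range(1, 2001)]
g = [math.exp(-tau / T) / T for tau in taus]   # exponential transit-time distribution

def c_in(t):
    """Step input: tracer concentration switches on at t = 0."""
    return 1.0 if t >= 0 else 0.0

def c_out(t):
    """Output concentration as a discretized convolution of input with g."""
    return sum(c_in(t - tau) * gt * dt for tau, gt in zip(taus, g))

# For a step input the EM response approaches 1 - exp(-t/T) analytically,
# so the numeric convolution can be checked against that curve.
```

    Swapping g for a delta function recovers the piston flow model (PFM); combined models (EPM, LPM) correspond to composite g functions.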

  11. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    Science.gov (United States)

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding need to be developed on a case by case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
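
    Thermodynamic models of the kind described are built from equilibrium-constant and mass-balance equations; for the single-site case the bound-complex concentration has a closed-form quadratic root, which multi-site models generalise into the coupled systems the paper fits. A sketch of that building block (function name illustrative):

```python
import math

def bound(P_t, L_t, Kd):
    """Single-site complex concentration [PL] from mass balance.

    Solves Kd = (P_t - x)(L_t - x) / x, i.e. the quadratic
    x^2 - (P_t + L_t + Kd) x + P_t * L_t = 0, taking the physical root.
    """
    b = P_t + L_t + Kd
    return (b - math.sqrt(b * b - 4.0 * P_t * L_t)) / 2.0

# In an ITC fit, the heat of injection i is proportional to
# delta_H * (bound(...) after injection i - bound(...) before it).
r = bound(1e-5, 2e-5, 1e-6)
```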

  12. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  13. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  14. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
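
    The MCMC machinery behind such a calibration reduces, in its simplest form, to a random-walk Metropolis sampler. Below is a one-parameter toy with a flat prior and a Gaussian error model; the "data" and all settings are invented for illustration and have nothing to do with the Ecomag setup:

```python
import math
import random

random.seed(42)

# Hypothetical observations; the unknown parameter theta is their common mean.
obs = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]

def log_post(theta):
    """Log posterior: flat prior + Gaussian likelihood with unit error variance."""
    return -0.5 * sum((y - theta) ** 2 for y in obs)

theta, samples = 0.0, []
for i in range(20000):
    prop = theta + random.gauss(0.0, 0.5)       # random-walk proposal
    # Metropolis acceptance: accept with probability min(1, ratio).
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 5000:                               # discard burn-in
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
```

    The likelihood formulations compared in the record (GLUE-style objective, AR(1) error model, independent errors) would each change only `log_post`; the sampler itself stays the same.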

  15. Brownian motion model with stochastic parameters for asset prices

    Science.gov (United States)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.
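
    The fixed-parameter baseline referred to here is geometric Brownian motion, dS = μS dt + σS dW. A sketch of simulating one price path with the exact log-normal update (parameter values are arbitrary; the stochastic-parameter extension in the record would redraw μ and σ at each step from a conditional distribution):

```python
import math
import random

random.seed(7)

def gbm_path(s0, mu, sigma, dt, n):
    """One geometric Brownian motion path via the exact log-normal step:
    S_{k+1} = S_k * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    path = [s0]
    for _ in range(n):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily prices under fixed (mu, sigma).
path = gbm_path(100.0, mu=0.05, sigma=0.2, dt=1 / 252, n=252)
```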

  16. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system.

  17. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation …

  18. Determination of the Corona model parameters with artificial neural networks

    International Nuclear Information System (INIS)

    Ahmet, Nayir; Bekir, Karlik; Arif, Hashimov

    2005-01-01

    Full text: The aim of this study is to calculate new model parameters taking into account the corona of electrical transmission line wires. For this purpose, a neural network model is proposed for modeling the corona frequency characteristics. This model was then compared with another model developed at the Polytechnic Institute of Saint Petersburg. The results of developing the specified corona model, for calculating its influence on the wave processes in multi-wire lines and determining its parameters, are presented. Calculation equations are given for an electrical transmission line, with allowance for the skin effect in the ground and wires, with reference to the developed corona model

  19. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters.

    Science.gov (United States)

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information.
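One common way to encode an uncertain kinetic rate as a fuzzy number, in the spirit of (though not necessarily identical to) the FSPN construction above, is a triangular fuzzy number queried through its α-cuts; the support, core and α value below are invented.

```python
# Triangular fuzzy number for a hypothetical kinetic rate: support [a, c],
# core (most plausible value) b. An alpha-cut returns the interval of values
# whose membership degree is at least alpha. All numbers are illustrative.
def alpha_cut(a, b, c, alpha):
    """Alpha-cut of the triangular fuzzy number (a, b, c), 0 <= alpha <= 1."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# at alpha = 0.5 the rate is only "half-plausibly" anywhere in this interval
lo, hi = alpha_cut(0.1, 0.3, 0.6, alpha=0.5)
```

A simulation-based analysis can then run the stochastic model repeatedly with rates sampled from such intervals at different α levels, propagating fuzziness through to the outputs.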

  20. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein systems …

  1. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed: one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and another … These methods can be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities.

  2. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principle. Then, a General Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of the compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
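A minimal particle swarm optimizer of the kind used in the GRCM-PSO pipeline can be sketched as follows; it minimizes a toy quadratic fitness instead of the GRCM model, and all hyperparameters (inertia, acceleration coefficients, swarm size) are illustrative.

```python
# Bare-bones PSO: each particle tracks its personal best, the swarm tracks
# a global best, and velocities blend inertia with pulls toward both bests.
# The fitness function stands in for the GRCM model's fitness.
import random

random.seed(2)

def fitness(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2   # minimum at (3, -1)

def pso(n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    dim = 2
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

In the paper's setting, the position vector would hold the reference-curve parameters and the reinitialization step would seed the swarm near the rough solutions.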

  3. Hematologic parameters in raptor species in a rehabilitation setting before release.

    Science.gov (United States)

    Black, Peter A; McRuer, David L; Horne, Leigh-Ann

    2011-09-01

    To be considered for release, raptors undergoing rehabilitation must have recovered from their initial injury in addition to being clinically healthy. For that purpose, a good understanding of reference hematologic values is important in determining release criteria for raptors in a rehabilitation setting. In this study, retrospective data were tabulated from clinically normal birds within 10 days of release from a rehabilitation facility. Hematologic values were compiled from 71 red-tailed hawks (Buteo jamaicensis), 54 Eastern screech owls (Megascops asio), 31 Cooper's hawks (Accipiter cooperii), 30 great-horned owls (Bubo virginianus), 28 barred owls (Strix varia), 16 bald eagles (Haliaeetus leucocephalus), and 12 broad-winged hawks (Buteo platypterus). Parameters collected included a white blood cell count and differential, hematocrit, and total protein concentration. Comparisons were made among species and among previously published reports of reference hematologic values in free-ranging birds or permanently captive birds. This is the first published report of reference values for Eastern screech owls, barred owls, and broad-winged hawks; and the first prerelease reference values for all species undergoing rehabilitation. These data can be used as a reference when developing release criteria for rehabilitated raptors.

  4. Measurement of a set of 60Co beam parameters by means of a mini-phantom

    International Nuclear Information System (INIS)

    Grudeva, T.; Kosturkov, I.; Videva, V.; Yaneva, M.

    2000-01-01

    This work presents the application of a recently developed methodology for measuring and differentiating the head-scattered and phantom-scattered contributions to the absorbed dose, measured on a 'ROCUS'-M therapy unit. By means of a suitably constructed mini-phantom, parameters specific to the therapy unit were measured, including the Total Scatter Factor, Collimator Scatter Factor and Volume Scatter Ratio. From the results obtained, the Phantom Scatter Factor was calculated. Output ratios were obtained both in the mini-phantom (Collimator Scatter Factor) and in a water phantom (Total Scatter Factor). The results illustrate the variation of the collimator- and phantom-scattered components with changing collimator setting. The Volume Scatter Ratio expresses the additional scatter when measurements are performed in a large water phantom compared to the scatter in a mini-phantom; its contribution to the dose is evaluated. A conclusion about the contribution of head-scattered and phantom-scattered photons to the dose was made. The advantages of the mini-phantom for measurement of the Collimator Scatter Factor were considered in detail

  5. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation … correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter …

  6. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In a logistic regression model the dependent variable is categorical, and the model is used to calculate odds. When the categories of the dependent variable are ordered, the model is an ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine values for a population based on a sample. The purpose of this research is the parameter estimation of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results give a local GWOLR model for each village, together with the probability of each category of the number of dengue fever patients.
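Independently of the geographic weighting, the ordinal (cumulative-logit) part of such a model maps a linear predictor to category probabilities as sketched below; the cutpoints and coefficient are invented values, not estimates from the Semarang data.

```python
# Cumulative-logit ordinal model: P(Y <= j) = logistic(theta_j - beta*x),
# and category probabilities are differences of adjacent cumulative
# probabilities. Cutpoints theta_j and coefficient beta are illustrative.
import math

def category_probs(x, cutpoints, beta):
    """Return P(Y = j) for each of the len(cutpoints)+1 ordered categories."""
    cum = [1.0 / (1.0 + math.exp(-(theta - beta * x))) for theta in cutpoints]
    cum.append(1.0)                         # P(Y <= last category) = 1
    probs = [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
    return probs

# four ordered categories require three increasing cutpoints
probs = category_probs(x=0.8, cutpoints=[-1.0, 0.5, 2.0], beta=1.2)
```

A geographically weighted version estimates a separate (cutpoints, beta) set per location, weighting observations by distance.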

  7. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.

  8. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies with various models, the NVPNLMM obtains better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and compare the accuracies of flood routing using various models, and the optimal estimated outflows by the NVPNLMM are closer to the observed outflows than those of the other models.
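The classical linear Muskingum routing that variable-parameter nonlinear models extend can be sketched as follows; K, X, the time step and the inflow hydrograph below are invented values for illustration.

```python
# Linear Muskingum routing: O_{t+1} = c0*I_{t+1} + c1*I_t + c2*O_t, with the
# routing coefficients derived from storage constant K, weighting factor X
# and time step dt (c0 + c1 + c2 = 1). All numbers are illustrative.
def muskingum_route(inflow, K, X, dt, O0):
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    out = [O0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

inflow = [10, 20, 50, 80, 60, 40, 25, 15, 10]      # hypothetical hydrograph
outflow = muskingum_route(inflow, K=2.0, X=0.2, dt=1.0, O0=10.0)
```

A variable-parameter variant would let K and X depend on the discharge at each step rather than stay constant, which is where the optimization-based estimation comes in.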

  9. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil, with focus on horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines … The lumped-parameter models are evaluated with respect to the prediction of the maximum response during excitation and the geometrical damping related to free vibrations of a footing.

  10. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    Science.gov (United States)

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reaction visits. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions.

  11. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  12. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation … are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments …

  13. Characterizing parameter sensitivity and uncertainty for a snow model across hydroclimatic regimes

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-01-01

    The National Weather Service (NWS) uses the SNOW17 model to forecast snow accumulation and ablation processes in snow-dominated watersheds nationwide. Successful application of the SNOW17 relies heavily on site-specific estimation of model parameters. The current study undertakes a comprehensive sensitivity and uncertainty analysis of SNOW17 model parameters using forcing and snow water equivalent (SWE) data from 12 sites with differing meteorological and geographic characteristics. The Generalized Sensitivity Analysis and the recently developed Differential Evolution Adaptive Metropolis (DREAM) algorithm are utilized to explore the parameter space and assess model parametric and predictive uncertainty. Results indicate that SNOW17 parameter sensitivity and uncertainty generally varies between sites. Of the six hydroclimatic characteristics studied, only air temperature shows strong correlation with the sensitivity and uncertainty ranges of two parameters, while precipitation is highly correlated with the uncertainty of one parameter. Posterior marginal distributions of two parameters are also shown to be site-dependent in terms of distribution type. The SNOW17 prediction ensembles generated by the DREAM-derived posterior parameter sets contain most of the observed SWE. The proposed uncertainty analysis provides posterior parameter information on parameter uncertainty and distribution types that can serve as a foundation for a data assimilation framework for hydrologic models.

  14. Statistical osteoporosis models using composite finite elements: a parameter study.

    Science.gov (United States)

    Wolfram, Uwe; Schwen, Lars Ole; Simon, Ulrich; Rumpf, Martin; Wilke, Hans-Joachim

    2009-09-18

    Osteoporosis is a widely spread disease with severe consequences for patients and high costs for health care systems. The disease is characterised by a loss of bone mass which induces a loss of mechanical performance and structural integrity. It was found that transverse trabeculae are thinned and perforated while vertical trabeculae stay intact. For understanding these phenomena and the mechanisms leading to fractures of trabecular bone due to osteoporosis, numerous researchers employ micro-finite element models. To avoid disadvantages in setting up classical finite element models, composite finite elements (CFE) can be used. The aim of the study is to test the potential of CFE. For that, a parameter study on numerical lattice samples with statistically simulated, simplified osteoporosis is performed. These samples are subjected to compression and shear loading. Results show that the biggest drop of compressive stiffness is reached for transverse isotropic structures losing 32% of the trabeculae (minus 89.8% stiffness). The biggest drop in shear stiffness is found for an isotropic structure also losing 32% of the trabeculae (minus 67.3% stiffness). The study indicates that losing trabeculae leads to a worse drop of macroscopic stiffness than thinning of trabeculae. The results further demonstrate the advantages of CFEs for simulating micro-structured samples.

  15. Low-dimensional modeling of a driven cavity flow with two free parameters

    DEFF Research Database (Denmark)

    Jørgensen, Bo Hoffmann; Sørensen, Jens Nørkær; Brøns, Morten

    2003-01-01

    … low-dimensional models. SPOD is capable of transforming data organized in different sets separately while still producing orthogonal modes. A low-dimensional model is constructed and used for analyzing bifurcations occurring in the flow in the lid-driven cavity with a rotating rod. The model allows one of the free parameters to appear in the inhomogeneous boundary conditions without the addition of any constraints. This is necessary because both the driving lid and the rotating rod are controlled simultaneously. Apparently, the results reported for this model are the first to be obtained for a low-dimensional model based on projections on POD modes for more than one free parameter.

  16. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are yet unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  17. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural response … The models are evaluated with respect to the maximum response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling …

  18. Parameter dependence and outcome dependence in dynamical models for state vector reduction

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Grassi, R.; Butterfield, J.; Fleming, G.N.

    1993-01-01

    The authors apply the distinction between parameter independence and outcome independence to the linear and nonlinear models of a recent nonrelativistic theory of continuous state vector reduction. It is shown that in the nonlinear model there is a set of realizations of the stochastic process that drives the state vector reduction for which parameter independence is violated for parallel spin components in the EPR-Bohm setup. Such a set has an appreciable probability of occurrence (∼ 1/2). On the other hand, the linear model exhibits only extremely small parameter dependence effects. Some specific features of the models are investigated and it is recalled that, as has been pointed out recently, to be able to speak of definite outcomes (or equivalently of possessed objective elements of reality) at finite times, the criteria for their attribution to physical systems must be slightly changed. The concluding section is devoted to a detailed discussion of the difficulties met when attempting to take, as a starting point for the formulation of a relativistic theory, a nonrelativistic scheme which exhibits parameter dependence. Here the authors derive a theorem which identifies the precise sense in which the occurrence of parameter dependence forbids a genuinely relativistic generalization. Finally, the authors show how the appreciable parameter dependence of the nonlinear model gives rise to problems with relativity, while the extremely weak parameter dependence of the linear model does not give rise to any difficulty, provided the appropriate criteria for the attribution of definite outcomes are taken into account. 19 refs

  19. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p …) bias in model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
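The core weighted least-squares idea behind IUWLS, down-weighting observations whose source/sink data are more uncertain, can be illustrated on a one-parameter linear model. The data and weights below are invented, and the real method adjusts the weights iteratively during calibration rather than fixing them up front.

```python
# Closed-form weighted least squares for y = a*x (no intercept):
# a = sum(w*x*y) / sum(w*x*x). Down-weighting a biased observation pulls
# the estimate back toward the trend of the reliable data. Data invented.
def weighted_lsq_slope(x, y, w):
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 12.0]        # last point distorted by bad pumping data
w_equal = [1.0, 1.0, 1.0, 1.0]   # OLS: all observations trusted equally
w_uncert = [1.0, 1.0, 1.0, 0.1]  # IUWLS-style: uncertain observation down-weighted

a_ols = weighted_lsq_slope(x, y, w_equal)
a_weighted = weighted_lsq_slope(x, y, w_uncert)
```

With a true slope near 2, the down-weighted fit sits much closer to the truth than the equally weighted one, which is the bias-reduction effect the abstract describes.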

  20. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    Unknown

    parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  1. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    ... of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  2. Prior distributions for item parameters in IRT models

    NARCIS (Netherlands)

    Matteucci, M.; S. Mignani, Prof.; Veldkamp, Bernard P.

    2012-01-01

    The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. Firstly, regression trees are implemented in order to build informative

  3. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude similar to that of the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.

  4. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  5. Stochastic hyperelastic modeling considering dependency of material parameters

    Science.gov (United States)

    Caylak, Ismail; Penner, Eduard; Dridger, Alex; Mahnken, Rolf

    2018-03-01

    This paper investigates the uncertainty of a hyperelastic model by treating random material parameters as stochastic variables. For its stochastic discretization a polynomial chaos expansion (PCE) is used. An important aspect in our work is the consideration of stochastic dependencies in the stochastic modeling of Ogden's material model. To this end, artificial experiments are generated using the auto-regressive moving average process based on real experiments. The parameter identification for all data provides statistics of Ogden's material parameters, which are subsequently used for stochastic modeling. Stochastic dependencies are incorporated into the PCE using a Nataf transformation from dependent distributed random variables to independent standard normal distributed ones. The representative numerical example shows that our proposed method adequately takes into account the stochastic dependencies of Ogden's material parameters.
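
For normally distributed marginals, the dependence-handling core of the Nataf transformation reduces to a Cholesky (de)correlation of the underlying Gaussian variables. A minimal two-dimensional sketch (the correlation value is an assumption for illustration, not one of Ogden's calibrated statistics):

```python
import math

def chol2(rho):
    """Cholesky factor L of the 2x2 correlation matrix [[1, rho], [rho, 1]]."""
    return [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]

def to_correlated(u, L):
    """z = L u: map independent standard normals to correlated ones."""
    return [L[0][0] * u[0], L[1][0] * u[0] + L[1][1] * u[1]]

def to_independent(z, L):
    """Forward substitution: recover the independent u from correlated z."""
    u0 = z[0] / L[0][0]
    u1 = (z[1] - L[1][0] * u0) / L[1][1]
    return [u0, u1]

L = chol2(0.8)                  # assumed correlation between two parameters
z = to_correlated([0.3, -1.2], L)
u = to_independent(z, L)        # round-trips back to [0.3, -1.2]
```

In the full Nataf transformation, marginal transforms to and from standard normal space wrap around this linear step; with non-Gaussian marginals the Gaussian-space correlation also has to be adjusted accordingly.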

  6. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    The paper presents a compact model for cyclic plasticity based on energy in terms of external and internal variables, and plastic yielding described by kinematic hardening and a flow potential with an additive term controlling the nonlinear cyclic hardening. The model is basically described by five parameters: external and internal stiffness, a yield stress and a limiting ultimate stress, and finally a parameter controlling the gradual development of plastic deformation. In contrast to previous work, where shaping the stress-strain loops is derived from multiple internal stress states, this effect is here represented by a single parameter. Calibration against numerous experimental results indicates that typically larger plastic strains develop than predicted by the Armstrong–Frederick model, contained as a special case of the present model for a particular choice of the shape parameter.

  7. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  8. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to

  9. Partial sum approaches to mathematical parameters of some growth models

    Science.gov (United States)

    Korkmaz, Mehmet

    2016-04-01

    A growth model is fitted by evaluating its mathematical parameters a, b and c. In this study, the method of partial sums was used. For finding the mathematical parameters, firstly three partial sums were used, secondly four partial sums, thirdly five partial sums, and finally N partial sums. The purpose of increasing the number of partial sums is to produce a better-fitting model, which gives a better expected value by minimizing the error sum of squares over the interval used.

  10. Post-mortem computed tomography: Technical principles and recommended parameter settings for high-resolution imaging.

    Science.gov (United States)

    Gascho, Dominic; Thali, Michael J; Niemann, Tilo

    2018-01-01

    Post-mortem computed tomography (PMCT) has become a standard procedure in many forensic institutes worldwide. However, the standard scan protocols offered by vendors are optimised for clinical radiology and its main considerations regarding computed tomography (CT), namely, radiation exposure and motion artefacts. Thus, these protocols aim at low-dose imaging and fast imaging techniques. However, these considerations are negligible in post-mortem imaging, which allows for significantly increased image quality. Therefore, the parameters have to be adjusted to achieve the best image quality. Several parameters affect the image quality differently and have to be weighed against each other to achieve the best image quality for different diagnostic interests. There are two main groups of parameters that are adjustable by the user: acquisition parameters and reconstruction parameters. Acquisition parameters have to be selected prior to scanning and affect the raw data composition. In contrast, reconstruction parameters affect the calculation of the slice stacks from the raw data. This article describes the CT principles from acquiring image data to post-processing and provides an overview of the significant parameters for increasing the image quality in PMCT. Based on the CT principles, the effects of these parameters on the contrast, noise, resolution and frequently occurring artefacts are described. This article provides a guide for the performance of PMCT in morgues, clinical facilities or private practices.

  11. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all the parameters. For a comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  12. Luminescence model with quantum impact parameter for low energy ions

    CERN Document Server

    Cruz-Galindo, H S; Martínez-Davalos, A; Belmont-Moreno, E; Galindo, S

    2002-01-01

    We have modified an analytical model of induced light production by energetic ions interacting in scintillating materials. The original model is based on the distribution of energy deposited by secondary electrons produced along the ion's track. The range of scattered electrons, and thus the energy distribution, depends on a classical impact parameter between the electron and the ion's track. The only adjustable parameter of the model is the quenching density ρ_q. The modification presented here consists in proposing a quantum impact parameter that leads to a better fit of the model to the experimental data at low incident ion energies. The light output response of CsI(Tl) detectors to low energy ions (<3 MeV/A) is fitted with the modified model and a comparison is made to the original model.

  13. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    Unknown

    (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the ...

  14. Model-based verification method for solving the parameter uncertainty in the train control system

    International Nuclear Information System (INIS)

    Cheng, Ruijun; Zhou, Jin; Chen, Dewang; Song, Yongduan

    2016-01-01

    This paper presents a parameter analysis method to solve the parameter uncertainty problem for hybrid systems and to explore the correlation of key parameters in a distributed control system. To improve the reusability of the control model, the proposed approach supports obtaining the constraint sets of all uncertain parameters in the abstract linear hybrid automata (LHA) model while satisfying the safety requirements of the train control system. Then, to address the state space explosion problem, an online verification method is proposed to monitor the operating status of high-speed trains, exploiting the real-time property of the train control system. Furthermore, we construct LHA formal models of the train tracking model and the movement authority (MA) generation process as cases to illustrate the effectiveness and efficiency of the proposed method. In the first case, we obtain the constraint sets of uncertain parameters that avoid collisions between trains. In the second case, the correlation of the position report cycle and the MA generation cycle is analyzed under both normal and abnormal conditions influenced by a packet-loss factor. Finally, considering the stochastic characterization of time distributions and the real-time features of the moving block control system, the transient probabilities of the wireless communication process are obtained by stochastic time Petri nets. - Highlights: • We solve the parameter uncertainty problem by using a model-based method. • We acquire the parameter constraint sets by verifying linear hybrid automata models. • Online verification algorithms are designed to monitor the high-speed trains. • We analyze the correlation of key parameters and uncritical parameters. • The transient probabilities are obtained by using reliability analysis.

  15. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  16. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    Full Text Available To capture the spatial and temporal variability of gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
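
The cross-validation used to evaluate such a spatial extrapolation scheme can be sketched independently of the regression machinery. In this sketch a simple mean predictor stands in for the support vector regression, and the site names and parameter values are hypothetical:

```python
def loso_cv(site_params, predict):
    """Leave-one-site-out cross-validation of a parameter-extrapolation
    scheme: predict each site's calibrated value from the other sites."""
    errors = {}
    for held_out, value in site_params.items():
        train = [v for s, v in site_params.items() if s != held_out]
        errors[held_out] = predict(train) - value
    return errors

# Hypothetical maximum-light-use-efficiency calibrations at four sites;
# a mean-of-training-sites predictor stands in for the support vector
# regression on site attributes.
eps_max = {"site_a": 0.42, "site_b": 0.48, "site_c": 0.40, "site_d": 0.50}
errs = loso_cv(eps_max, predict=lambda train: sum(train) / len(train))
```

Replacing the lambda with a model trained on site characteristics gives exactly the evaluation loop described in the abstract: each site is predicted as if no eddy covariance data were available there.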

  17. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
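
One common way to form a model-averaged estimate of a derived parameter uses Akaike weights (the abstract does not specify the weighting scheme, and the numbers below are hypothetical):

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min)/2)."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical: the same derived parameter (say, an effective dose ED50)
# estimated under two candidate models, with their AIC scores.
estimates = [1.8, 2.1]
aics = [100.0, 101.5]
w = akaike_weights(aics)
averaged = sum(wi * est for wi, est in zip(w, estimates))
```

The paper's contribution concerns the harder step that this sketch omits: attaching asymptotically correct standard errors and simultaneous confidence intervals to `averaged`.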

  18. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
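
Bayesian updating of an expert-elicited prior with observed counts can be sketched in closed form with a conjugate Beta-Binomial pair (the counts are illustrative, not Berrang and Dickens's data):

```python
# Expert judgment encoded as a Beta(a, b) prior on a contamination
# probability; plant data (k positives out of n carcasses) update it
# in closed form via conjugacy.
a_prior, b_prior = 2.0, 8.0          # expert prior with mean 0.2
k, n = 12, 30                        # hypothetical observed positives / samples

a_post = a_prior + k                 # Beta posterior parameters
b_post = b_prior + (n - k)
posterior_mean = a_post / (a_post + b_post)
```

The posterior mean (0.35 here) sits between the expert's prior mean (0.2) and the data frequency (0.4), with the balance set by the relative weight of prior pseudo-counts and sample size; the model in the paper applies the same principle to its transfer parameters.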

  19. Lumped-Parameter Models for Windturbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars

    The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computationalmodel significantly. This may be obtained by the fitting...... of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  20. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
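
The joint state-and-parameter estimation idea can be sketched with an extended Kalman filter on a toy exponential-decay model (not the paper's heat shock or gene regulation models): the state is augmented with the unknown rate so the filter estimates both from noisy observations.

```python
import random

random.seed(1)

# Toy system: x' = -k*x with unknown rate k. Augment the state to
# s = [x, k] so the EKF estimates both from noisy measurements of x.
dt, k_true, sigma_meas = 0.1, 0.5, 0.001

x = 5.0
measurements = []
for _ in range(50):                       # simulate the truth (Euler steps)
    x = x + dt * (-k_true * x)
    measurements.append(x + random.gauss(0.0, sigma_meas))

s = [5.0, 1.0]                            # deliberately wrong initial k
P = [[0.01, 0.0], [0.0, 1.0]]             # state covariance
Q = [[1e-8, 0.0], [0.0, 1e-8]]            # process noise
R = sigma_meas ** 2                       # measurement noise variance

for z in measurements:
    # predict: f([x, k]) = [x + dt*(-k*x), k], Jacobian F = [[a, b], [0, 1]]
    a, b = 1.0 - s[1] * dt, -s[0] * dt
    s = [s[0] + dt * (-s[1] * s[0]), s[1]]
    # P <- F P F^T + Q, with the 2x2 products written out
    fp = [[a * P[0][0] + b * P[1][0], a * P[0][1] + b * P[1][1]],
          [P[1][0], P[1][1]]]
    P = [[fp[0][0] * a + fp[0][1] * b + Q[0][0], fp[0][1] + Q[0][1]],
         [fp[1][0] * a + fp[1][1] * b + Q[1][0], fp[1][1] + Q[1][1]]]
    # update with measurement model H = [1, 0]
    innovation = z - s[0]
    gain_denom = P[0][0] + R
    K = [P[0][0] / gain_denom, P[1][0] / gain_denom]
    s = [s[0] + K[0] * innovation, s[1] + K[1] * innovation]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

k_est = s[1]                              # should approach k_true = 0.5
```

The cross-covariance between state and parameter, built up during the prediction step, is what steers the parameter estimate; the paper's identifiability test and refinement step guard against cases where that coupling is too weak.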

  1. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
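
The voting step can be sketched as follows, under the assumed convention (an assumption for illustration, not taken from the paper) that a positive pairwise level set value assigns the point to the first material of the pair:

```python
from itertools import combinations

def vote(materials, phi):
    """Collect sign information from all pairwise level sets and return
    the material with the most votes. phi[(a, b)] > 0 is taken to mean
    the point lies on material a's side of the (a, b) interface."""
    votes = {m: 0 for m in materials}
    for a, b in combinations(materials, 2):
        if (a, b) in phi:                 # not all pairs share an interface
            votes[a if phi[(a, b)] > 0 else b] += 1
    return max(votes, key=votes.get)

materials = ["air", "water", "steel"]
phi = {("air", "water"): -0.2, ("air", "steel"): -0.5, ("water", "steel"): 0.3}
material = vote(materials, phi)           # "water" wins with two votes
```

Because every pairwise function casts exactly one vote, each point receives a unique designation, which is how the model avoids the overlap and vacuum states mentioned in the abstract.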

  2. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    Science.gov (United States)

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  3. MATHEMATICAL MODELING OF FLOW PARAMETERS FOR SINGLE WIND TURBINE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available It is known that on the territory of the Russian Federation the construction of several large wind farms is planned. Tasks connected with the design and efficiency evaluation of wind farms are in demand today. One of the possible directions in design is connected with mathematical modeling. The method of large eddy simulation, developed within computational hydrodynamics, allows one to reproduce the unsteady structure of the flow in detail and to determine various integrated values. The calculation of a single wind turbine installation by means of large eddy simulation and the actuator line method along the turbine blade is given in this work. For the problem definition a computational domain in the form of a box was considered and an adapted unstructured grid was used. The mathematical model included the main equations of continuity and momentum for incompressible fluid. The large-scale vortex structures were calculated by means of integration of the filtered equations. The calculation was carried out with the Smagorinsky model for determination of the subgrid-scale turbulent viscosity. The geometrical parameters of the wind turbine were set proceeding from open sources on the Internet. All physical values were defined at the center of each computational cell. The approximation of terms in the equations was executed with second-order accuracy in time and space. The equations coupling velocity and pressure were solved by means of the iterative algorithm PIMPLE. The total number of calculated physical values on each time step was 18, so the resources of a high-performance cluster were required. As a result of the flow calculation in the wake of the three-bladed turbine, average and instantaneous values of velocity, pressure, subgrid kinetic energy and turbulent viscosity, and components of the subgrid stress tensor were worked out. The received results matched the known results of experiments and numerical simulation, testifying to the opportunity...

  4. A three-parameter model for fatigue crack growth data analysis

    Directory of Open Access Journals (Sweden)

    A. De Iorio

    2012-07-01

    Full Text Available A three-parameter model for the interpolation of fatigue crack propagation data is proposed. It has been validated with a literature data set obtained by testing 180 M(T) specimens under three different loading levels. In detail, it is highlighted that the results of the analysis carried out by means of the proposed model are smoother and clearer than those obtainable using other methods or models. Also, the parameters of the model have been computed and some peculiarities have been pointed out.
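
The abstract does not give the model's functional form, so as a generic illustration of fitting a three-parameter curve of the kind used for crack-growth data, one can scan a nonlinear exponent and solve the remaining two parameters by closed-form linear least squares (the form y = a + b·x^c and all numbers are assumptions):

```python
def fit_three_param(xs, ys, c_grid):
    """Scan the exponent c; for each c, solve y ~ a + b*x**c by linear
    least squares in closed form, and keep the (a, b, c) with least SSE."""
    best = None
    n = len(xs)
    for c in c_grid:
        t = [x ** c for x in xs]
        tbar, ybar = sum(t) / n, sum(ys) / n
        stt = sum((ti - tbar) ** 2 for ti in t)
        sty = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, ys))
        b = sty / stt                      # slope of y on x**c
        a = ybar - b * tbar                # intercept
        sse = sum((a + b * ti - yi) ** 2 for ti, yi in zip(t, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    return best

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.5 + 0.2 * x ** 1.5 for x in xs]    # noiseless synthetic data
sse, a, b, c = fit_three_param(xs, ys, [1.0, 1.25, 1.5, 1.75, 2.0])
```

Separating the one genuinely nonlinear parameter from the two linear ones keeps the fit cheap and avoids a full nonlinear optimizer for this simple case.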

  5. Data Set for Emperical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    the model is empirically validated and its limitations for DSF modeling are identified. Correspondingly, the existence and availability of the experimental data are essential. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within...

  6. Repetitive Identification of Structural Systems Using a Nonlinear Model Parameter Refinement Approach

    Directory of Open Access Journals (Sweden)

    Jeng-Wen Lin

    2009-01-01

    Full Text Available This paper proposes a statistical-confidence-interval-based nonlinear model parameter refinement approach for the health monitoring of structural systems subjected to seismic excitations. The developed model refinement approach uses the 95% confidence intervals of the estimated structural parameters to determine their statistical significance in a least-squares regression setting. When a parameter's confidence interval covers the zero value, it is statistically justifiable to truncate that parameter. The remaining parameters repetitively undergo this parameter-sifting process for model refinement until the statistical significance of the parameters cannot be further improved. This newly developed model refinement approach is implemented for the series models of multivariable polynomial expansions: the linear, the Taylor series, and the power series model, leading to a more accurate identification as well as a more controllable design for system vibration control. Because the statistical-regression-based model refinement approach is intrinsically used to process a "batch" of data and obtain an ensemble average estimation such as the structural stiffness, the Kalman filter and one of its extended versions are introduced to the refined power series model for structural health monitoring.
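
The confidence-interval sifting step can be sketched as follows. The coefficient names and values are hypothetical, and in the actual approach the regression would be re-fitted after each truncation before re-testing; this sketch only re-tests fixed estimates:

```python
def refine(params):
    """Drop parameters whose 95% confidence interval covers zero, then
    re-test the survivors; stop when no parameter can be truncated."""
    changed = True
    while changed:
        changed = False
        for name, (est, se) in list(params.items()):
            lo, hi = est - 1.96 * se, est + 1.96 * se
            if lo <= 0.0 <= hi:           # statistically insignificant
                del params[name]
                changed = True
    return params

# Hypothetical stiffness-term estimates and their standard errors:
# k2's interval (-0.68, 1.28) covers zero, so it is truncated.
refined = refine({"k1": (2.5, 0.4), "k2": (0.3, 0.5), "k3": (-1.1, 0.2)})
```

The 1.96 multiplier is the standard normal 95% quantile; with few data points a t-distribution quantile would be used instead.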

  7. Development of new model for high explosives detonation parameters calculation

    Directory of Open Access Journals (Sweden)

    Jeremić Radun

    2012-01-01

    Full Text Available A simple semi-empirical model for the calculation of detonation pressure and velocity for CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan's method for determining the chemical composition of the detonation products, and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical model of detonation based on the BKW EOS, the values calculated with the proposed model have significantly better accuracy.

  8. Analysis of the Relationship Between Physical Environmental Parameters and Beach Water Quality in a Subtropical Setting

    Science.gov (United States)

    Zhu, X.; Wang, J. D.; Elmir, S.; Solo-Gabriele, H. M.; Wright, M. E.; Abdelzaher, A.

    2006-12-01

    Fecal indicator bacteria (FIB) are found in high concentrations in sewage water, and thus are used to indicate whether fecal-material-related pathogens are present and to determine whether a beach is safe for recreational use. Studies have shown, however, that in subtropical regions FIB concentrations above EPA standards may be present in the absence of known point sources of human or animal waste, thus reducing the efficacy of FIB beach monitoring programs. An interdisciplinary study is being conducted in Miami, Florida; the goal is to understand the sources and behavior of FIB on a beach without point source loads and also to improve beach health hazard warnings in subtropical regions. This study examines the relationship between enterococci (the EPA-recommended FIB for use in marine water) and physical environmental parameters such as rain, tide and wind. FIB data employed include Florida Department of Health weekly beach monitoring enterococci (ENT) data during a five-year period and a two-day experiment with hourly sampling at Hobie Cat Beach on Virginia Key in the Miami metropolitan area. The environmental data consist of wind from a nearby CMAN tower, and local rain and tide. The analysis also includes data from nearby beaches monitored by the Health Department. Results show that the correlation coefficient between ENT and tide at Hobie Cat Beach is positive but not significant (r = 0.17). Rain events have a significant influence on ENT at Hobie Cat Beach, with a correlation coefficient of up to 0.7, while at other beaches the correlation is less than 0.2. Reasons for this aberration are being investigated. Although this is the only beach allowing dogs, there are other factors of possible importance, such as tidal flats frequented by birds and weaker water circulation and exchange at this beach, which faces a bay rather than the ocean. Higher ENT levels (>300 CFU/100 ml of water) are more likely (67% of the time) to be associated with periods of onshore winds, which may affect the ...

  9. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    Science.gov (United States)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial conditions are optimized are compared with experiments in which both the initial conditions and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
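The twin-experiment idea can be illustrated in heavily simplified form: generate synthetic "observations" from a known parameter value, then recover it by minimizing the SST misfit over the assimilation window. The scalar model below is an illustrative stand-in for the ICM, not the authors' system, and the scan replaces the adjoint-based 4D-Var minimization:

```python
# Toy twin experiment: dSST/dt = alpha * SL(t) - r * SST, where alpha plays
# the role of the thermocline-effect parameter alphaTe. "Observations" are
# generated with a known true alpha; the 4D-Var-style cost is the sum of
# squared SST misfits, minimized here by a coarse parameter scan.
import math

def run_model(alpha, n=200, dt=0.1, r=0.2):
    sst, out = 0.0, []
    for t in range(n):
        sl = math.sin(0.1 * t)            # idealized sea-level forcing
        sst += dt * (alpha * sl - r * sst)
        out.append(sst)
    return out

def cost(alpha, obs):
    return sum((s - o) ** 2 for s, o in zip(run_model(alpha), obs))

true_alpha = 0.8
obs = run_model(true_alpha)               # synthetic "truth"
best = min((cost(a / 100, obs), a / 100) for a in range(0, 201))[1]
print(best)  # recovers 0.8
```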

  10. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    Science.gov (United States)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial conditions are optimized, and experiments in which both the initial conditions and this additional model parameter are optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  11. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  12. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  13. Parameter uncertainty analysis of a biokinetic model of caesium

    International Nuclear Information System (INIS)

    Li, W.B.; Oeh, U.; Klein, W.; Blanchardon, E.; Puncher, M.; Leggett, R.W.; Breustedt, B.; Nosske, D.; Lopez, M.A.

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Parameter uncertainty analysis methods were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implications of the larger uncertainty factor of 43 in whole-body retention at late times (after Day 500) for the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS. (authors)
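The uncertainty factor defined above, UF = sqrt(P97.5 / P2.5), is easy to sketch. The Monte Carlo samples below are a hypothetical lognormal spread standing in for propagated model predictions, not the paper's results:

```python
import math, random

def percentile(samples, q):
    """Linear-interpolation percentile of a sample (q in percent)."""
    s = sorted(samples)
    idx = q / 100 * (len(s) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def uncertainty_factor(samples):
    """UF = square root of the 97.5th/2.5th percentile ratio."""
    return math.sqrt(percentile(samples, 97.5) / percentile(samples, 2.5))

random.seed(1)
# hypothetical Monte Carlo predictions of whole-body retention (relative units)
preds = [random.lognormvariate(0.0, 0.5) for _ in range(10000)]
print(round(uncertainty_factor(preds), 2))
```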

  14. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  15. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  16. Estimating qualitative parameters for assessment of body balance and arm function in a simulated ambulatory setting

    NARCIS (Netherlands)

    van Meulen, Fokke; Reenalda, Jasper; Veltink, Petrus H.

    2013-01-01

    Continuous daily-life monitoring of the balance control and arm function of stroke survivors in an ambulatory setting is essential for optimal guidance of rehabilitation. In a simulated ambulatory setting, the balance and arm function of seven stroke subjects are evaluated using on-body measurement systems

  17. Fuzzy GML Modeling Based on Vague Soft Sets

    Directory of Open Access Journals (Sweden)

    Bo Wei

    2017-01-01

    The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems are inevitably encountered in spatial knowledge expression, and for expressions in text mode this fuzziness is even broader, so describing and representing fuzziness in GML seems necessary. Three kinds of fuzziness can be found in GML: element fuzziness, chain fuzziness, and attribute fuzziness. Both element fuzziness and chain fuzziness reflect fuzziness between GML elements, so the representation of chain fuzziness can be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. Then, the DTDs and the schemas of the five elements are correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key to the implementation of the modeling. The establishment of vague soft set GML enables GML to represent fuzziness and solves the problem of the lack of fuzzy information expression in GML.

  18. Determination of modeling parameters for power IGBTs under pulsed power conditions

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Gregory E [Los Alamos National Laboratory; Van Gordon, Jim A [U. OF MISSOURI; Kovaleski, Scott D [U. OF MISSOURI

    2010-01-01

    While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters that correspond to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 µs. Additionally, comparisons of the experimental and simulated data will be presented.
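The seventeen-parameter Hefner fit is far beyond a short sketch, but the least-squares idea it rests on can be shown in much-reduced form: fitting a hypothetical two-parameter on-state model V = V0 + R·I to pulse measurements via the closed-form normal equations. The data values are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope   # V0, R

currents = [100, 250, 400, 600, 750]          # A (hypothetical pulse data)
voltages = [2.1, 2.4, 2.7, 3.1, 3.4]          # V
v0, r = fit_line(currents, voltages)
print(round(v0, 2), round(r * 1e3, 2))  # → 1.9 2.0  (V0 in V, R in milliohms)
```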

  19. Sensor placement for calibration of spatially varying model parameters

    Science.gov (United States)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of the spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly, which requires repeated calibration of the spatially varying parameter, and this significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation of the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
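The optimization layer alone can be sketched by leaving out the Kriging surrogates and KLE, and using the log-determinant of the prior covariance submatrix as a simple stand-in for the expected K-L information gain (a common proxy, not the paper's exact criterion):

```python
# Simulated annealing over candidate sensor sets on a 1-D grid, maximizing
# the log-determinant of the prior covariance submatrix.
import math, random

def cov(i, j, ell=2.0):
    return math.exp(-abs(i - j) ** 2 / (2 * ell ** 2))   # squared-exp prior

def logdet(idx):
    """Cholesky log-determinant of the covariance submatrix at sensor set idx."""
    n = len(idx)
    a = [[cov(idx[i], idx[j]) + (1e-6 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    ld = 0.0
    for i in range(n):
        for j in range(i + 1):
            s = a[i][j] - sum(a[i][k] * a[j][k] for k in range(j))
            if i == j:
                a[i][i] = math.sqrt(s)
                ld += 2 * math.log(a[i][i])
            else:
                a[i][j] = s / a[j][j]
    return ld

random.seed(0)
grid, k = list(range(20)), 4
state = random.sample(grid, k)
best, best_val = state[:], logdet(state)
temp = 1.0
for _ in range(2000):
    cand = state[:]
    cand[random.randrange(k)] = random.choice([g for g in grid if g not in cand])
    dv = logdet(cand) - logdet(state)
    if dv > 0 or random.random() < math.exp(dv / temp):   # Metropolis acceptance
        state = cand
        if logdet(state) > best_val:
            best, best_val = state[:], logdet(state)
    temp *= 0.995                                          # cooling schedule
print(sorted(best))  # well-separated sensors carry the most information
```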

  20. An improved solution of local window parameters setting for local singularity analysis based on Excel VBA batch processing technology

    Science.gov (United States)

    Zhang, Daojun; Cheng, Qiuming; Agterberg, Frits; Chen, Zhijun

    2016-03-01

    In this paper Excel VBA is used for batch calculation in Local Singularity Analysis (LSA), which extracts information from different kinds of geoscience data. Capabilities and advantages of a new module called Batch Tool for Local Singularity Index Mapping (BTLSIM) are: (1) batch production of series of local singularity maps with different settings of local window size, shape and orientation parameters; (2) local parameter optimization based on statistical tests; and (3) provision of extra output layers describing how spatial changes induced by parameter optimization are related to the spatial structure of the original input layers.

  1. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  2. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of

  3. SAC-SMA a priori parameter differences and their impact on distributed hydrologic model simulations

    Science.gov (United States)

    Zhang, Ziya; Koren, Victor; Reed, Seann; Smith, Michael; Zhang, Yu; Moreda, Fekadu; Cosgrove, Brian

    2012-02-01

    Deriving a priori gridded parameters is an important step in the development and deployment of an operational distributed hydrologic model. Accurate a priori parameters can reduce the manual calibration effort and/or speed up the automatic calibration process, reduce calibration uncertainty, and provide valuable information at ungauged locations. Underpinned by reasonable parameter data sets, distributed hydrologic modeling can help improve water resource and flood and flash flood forecasting capabilities. Initial efforts at the National Weather Service Office of Hydrologic Development (NWS OHD) to derive a priori gridded Sacramento Soil Moisture Accounting (SAC-SMA) model parameters for the conterminous United States (CONUS) were based on a relatively coarse resolution soils property database, the State Soil Geographic Database (STATSGO) (Soil Survey Staff, 2011), and on the assumption of uniform land use and land cover. In an effort to improve the parameters, subsequent work was performed to fully incorporate spatially variable land cover information into the parameter derivation process. Following that, finer-scale soils data (the county-level Soil Survey Geographic Database (SSURGO); Soil Survey Staff, 2011a,b), together with the variable land cover data, were used to derive a third set of CONUS a priori gridded parameters. It is anticipated that the second and third parameter sets, which incorporate more physical data, will be more realistic and consistent. Here, we evaluate whether this is actually the case by intercomparing these three sets of a priori parameters along with their associated hydrologic simulations, which were generated by applying the National Weather Service Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM) (Koren et al., 2004) in a continuous fashion with an hourly time step. This model adopts a well-tested conceptual water balance model, SAC-SMA, applied on a regular spatial grid, and links to physically

  4. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    A common problem in dynamic systems is determining the parameters of an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. User-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively little computing time and few iterations.
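The integration-based estimation idea can be sketched with a hypothetical first-order kinetic model: integrate the ODE for each candidate parameter and keep the value minimizing the squared error against measurements. PARES itself is MATLAB-based; this is an independent Python illustration with invented data:

```python
def simulate(k, c0=1.0, dt=0.01, n=500):
    """Explicit-Euler integration of dC/dt = -k*C; returns {time: concentration}."""
    c, traj = c0, {0.0: c0}
    for i in range(1, n + 1):
        c += dt * (-k * c)
        traj[round(i * dt, 2)] = c
    return traj

# hypothetical concentration measurements (arbitrary units)
data = {0.0: 1.00, 1.0: 0.61, 2.0: 0.37, 3.0: 0.22, 4.0: 0.14, 5.0: 0.08}

def sse(k):
    traj = simulate(k)
    return sum((traj[t] - c) ** 2 for t, c in data.items())

# coarse scan over candidate rate constants 0.01..2.00
best_k = min(range(1, 201), key=lambda i: sse(i / 100)) / 100
print(best_k)  # close to the underlying decay rate of ~0.5
```

In practice the scan would be replaced by a gradient-based or global optimizer, but the simulate-then-score structure is the same.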

  5. Improving the realism of hydrologic model through multivariate parameter estimation

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skills of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with an aim to improve the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed by in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour.
Res., 52, http://dx.doi.org/10

  6. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
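A minimal harmony search sketch: estimating hypothetical coefficients (a, b) of a sound-power regression L = a + b·log10(V) from synthetic measurements (this is the generic HS scheme, not the actual ASJ regression form or its published coefficients):

```python
import math, random

random.seed(3)
speeds = [40, 60, 80, 100, 120]                 # vehicle speeds (km/h)
true_a, true_b = 46.0, 30.0                     # hypothetical "truth"
levels = [true_a + true_b * math.log10(v) for v in speeds]

def err(p):
    a, b = p
    return sum((a + b * math.log10(v) - l) ** 2 for v, l in zip(speeds, levels))

HMS, HMCR, PAR, BW = 20, 0.9, 0.3, 0.5          # harmony search controls
bounds = [(0.0, 100.0), (0.0, 60.0)]
memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(HMS)]
for _ in range(5000):
    new = []
    for d, (lo, hi) in enumerate(bounds):
        if random.random() < HMCR:              # memory consideration
            x = random.choice(memory)[d]
            if random.random() < PAR:           # pitch adjustment
                x = min(hi, max(lo, x + random.uniform(-BW, BW)))
        else:                                   # random selection
            x = random.uniform(lo, hi)
        new.append(x)
    worst = max(range(HMS), key=lambda i: err(memory[i]))
    if err(new) < err(memory[worst]):           # replace worst harmony
        memory[worst] = new
best = min(memory, key=err)
print([round(x, 1) for x in best])  # approaches [46.0, 30.0]
```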

  7. Ground level enhancement (GLE) energy spectrum parameters model

    Science.gov (United States)

    Qin, G.; Wu, S.

    2017-12-01

    We study the ground level enhancement (GLE) events in solar cycle 23 using the four energy spectrum parameters, the normalization parameter C, low-energy power-law slope γ1, high-energy power-law slope γ2, and break energy E0, obtained by Mewaldt et al. (2012), who fit the observations to a double power-law equation. We divide the GLEs into two groups, one with strong acceleration by interplanetary (IP) shocks and one without, according to the conditions of the solar eruptions. We then fit the four parameters to the solar event conditions to obtain models of the parameters for the two groups of GLEs separately, thereby establishing an energy spectrum model for GLEs for future space weather prediction.
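Double power laws of this kind are commonly written in the Band form: a low-energy power law with slope γ1 rolling over at break energy E0 into a high-energy power law with slope γ2. The sketch below uses one common parameterization with invented parameter values; the exact functional form used by Mewaldt et al. should be checked against the paper:

```python
import math

def double_power_law(E, C, g1, g2, E0):
    """Band-type spectrum: joins the two power laws continuously at Eb."""
    Eb = (g2 - g1) * E0                       # transition energy
    if E <= Eb:
        return C * E ** (-g1) * math.exp(-E / E0)
    return C * E ** (-g2) * Eb ** (g2 - g1) * math.exp(g1 - g2)

# hypothetical parameter values for illustration (energies in MeV)
C, g1, g2, E0 = 1e6, 1.2, 3.5, 30.0
for E in (10, 50, 100, 300):
    print(E, f"{double_power_law(E, C, g1, g2, E0):.3g}")
```

The two branches are constructed so that the function (and its logarithmic slope) is continuous at the transition energy Eb.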

  8. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al{sub 2}O{sub 3}) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  9. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machines. Results suggest that these approaches give consistent measurements of the impacts of the major hydrologic parameters on the response variables, but differ in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets, output response variables, and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes by which parameter values should be adjusted during parametric model optimization.
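One of the deviation metrics named above, the Nash–Sutcliffe coefficient, is one minus the ratio of the model error variance to the variance of the observations (1 = perfect fit, 0 = no better than the observed mean). The runoff values below are hypothetical:

```python
def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed series."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [3.0, 5.0, 9.0, 6.0, 4.0]   # hypothetical runoff observations
sim = [2.8, 5.5, 8.4, 6.3, 4.1]   # model output
print(round(nash_sutcliffe(sim, obs), 3))  # → 0.965
```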

  10. An intelligent diagnosis model based on rough set theory

    Science.gov (United States)

    Li, Ze; Huang, Hong-Xing; Zheng, Ye-Lu; Wang, Zhou-Yuan

    2013-03-01

    With the popularity of computers and the rapid development of information technology, increasing the accuracy of agricultural diagnosis has become a difficult problem in popularizing agricultural expert systems. Building on existing research and the knowledge acquisition technology of rough set theory, we propose an intelligent diagnosis model for large sample data. A rough set decision table is extracted from the sample properties; the decision table is used to categorize the inference relations and acquire property rules related to diagnostic inference; and a rough set knowledge reasoning algorithm realizes the intelligent diagnosis. Finally, we validate this diagnosis model by experiments. Introducing rough set theory provides an effective diagnosis model for agricultural expert systems with large sample data.
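The core rough-set step, approximating a decision class by the indiscernibility blocks of the condition attributes, can be sketched on a tiny hypothetical crop-disease decision table (not the paper's data):

```python
# objects: (leaf_spots, wilting) -> diagnosis
table = {
    1: (("yes", "no"), "blight"),
    2: (("yes", "no"), "blight"),
    3: (("yes", "yes"), "blight"),
    4: (("yes", "yes"), "healthy"),
    5: (("no", "no"), "healthy"),
}

def approximations(table, decision):
    """Lower/upper approximations of a decision class w.r.t. condition attributes."""
    # group objects that are indiscernible on the condition attributes
    blocks = {}
    for obj, (cond, _) in table.items():
        blocks.setdefault(cond, set()).add(obj)
    target = {o for o, (_, d) in table.items() if d == decision}
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:        # block certainly belongs to the class
            lower |= block
        if block & target:         # block possibly belongs to the class
            upper |= block
    return lower, upper

lo, up = approximations(table, "blight")
print(sorted(lo), sorted(up))  # → [1, 2] [1, 2, 3, 4]
```

Objects 3 and 4 share condition attributes but differ in diagnosis, so they fall into the boundary region (upper minus lower); certain diagnostic rules are generated only from the lower approximation.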

  11. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  12. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
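A minimal sketch of the single diode equation being solved numerically for an I-V curve. The parameter values below are illustrative only, not estimates from the report, and the report's estimation method itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def diode_current(V, IL, I0, Rs, Rsh, nVt):
    """Solve the implicit single diode equation for the current I at voltage V:
        I = IL - I0*(exp((V + I*Rs)/nVt) - 1) - (V + I*Rs)/Rsh
    """
    f = lambda I: IL - I0 * (np.exp((V + I * Rs) / nVt) - 1) - (V + I * Rs) / Rsh - I
    return brentq(f, -1e3, 2 * IL)  # the physical current lies within this bracket

# Illustrative module-level parameters (photocurrent, saturation current,
# series/shunt resistance, modified ideality factor n*Ns*Vt)
params = dict(IL=5.0, I0=1e-9, Rs=0.2, Rsh=300.0, nVt=1.5)

isc = diode_current(0.0, **params)                               # short-circuit current
voc = brentq(lambda V: diode_current(V, **params), 0.1, 36.0)    # open-circuit voltage
V_sweep = np.linspace(0.0, 33.0, 100)
I_curve = np.array([diode_current(v, **params) for v in V_sweep])
```

Fitting the five parameters then amounts to adjusting them until curves like `I_curve` match the measured I-V curves across irradiance and temperature conditions.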

  13. Modeling Chinese ionospheric layer parameters based on EOF analysis

    Science.gov (United States)

    Yu, You; Wan, Weixing

    2016-04-01

    Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors individually reflect the spatial distributions (e.g., the latitudinal dependence such as the equatorial ionization anomaly structure and the longitudinal structure with east-west differences) and the temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term observations of ionosondes are assimilated to obtain the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07 model) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.
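The core EOF step, splitting a space-time field into spatial patterns and time-varying amplitudes, can be sketched with an SVD on a toy field. The grid, signals, and noise level below are invented for illustration and bear no relation to the paper's ionosonde data.

```python
import numpy as np

# Toy field: 50 time samples of a quantity on a 6x8 spatial grid, built from
# two known spatial patterns with time-varying amplitudes plus weak noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0 * np.pi, 50)
lat = np.linspace(-1.0, 1.0, 6)[:, None]
lon = np.linspace(-1.0, 1.0, 8)[None, :]
pattern1 = np.cos(np.pi * lat / 2) * np.ones_like(lon)   # latitudinal structure
pattern2 = np.ones_like(lat) * np.sin(np.pi * lon)       # longitudinal structure
field = (np.sin(t)[:, None, None] * pattern1
         + 0.3 * np.cos(2 * t)[:, None, None] * pattern2
         + 0.01 * rng.normal(size=(50, 6, 8)))

# EOF analysis: reshape to (time, space), remove the time mean, then SVD.
X = field.reshape(50, -1)
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
amplitudes = U * s                  # EOF amplitudes (time series), one column per mode
patterns = Vt                       # EOF patterns (spatial maps), one row per mode
explained = s**2 / np.sum(s**2)     # variance fraction captured by each mode
```

Assimilation as described in the abstract then keeps the background `patterns` fixed and re-estimates the `amplitudes` from station observations.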

  14. Parameters and variables appearing in repository design models

    International Nuclear Information System (INIS)

    Curtis, R.H.; Wart, R.J.

    1983-12-01

    This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables

  15. A lumped parameter, low dimension model of heat exchanger

    International Nuclear Information System (INIS)

    Kanoh, Hideaki; Furushoo, Junji; Masubuchi, Masami

    1980-01-01

    This paper reports on the results of an investigation of the distributed parameter model, the difference model, and the model of the method of weighted residuals for heat exchangers. By the method of weighted residuals (MWR), the opposite flow heat exchanger system is approximated by a low dimension, lumped parameter model. By assuming constant specific heat, constant density, the same form of tube cross-section, the same form of the surface of heat exchange, uniform flow velocity, a linear relation of heat transfer to flow velocity, a liquid heat carrier, and the thermal insulation of the liquid from outside, fundamental equations are obtained. The experimental apparatus was made of acrylic resin. The response of the temperature at the exit of the first liquid to the variation of the flow rate of the second liquid was measured and compared with the models. The MWR model shows good approximation for the low frequency region, and as the number of divisions increases, the good approximation extends to higher frequency regions. (Kato, T.)

  16. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent lighting. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in those experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search-space. Both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be reliably used to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
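A minimal particle swarm optimizer in the spirit described above, here fitting a hypothetical light-limited growth law to synthetic data. The growth law, bounds, and PSO coefficients are illustrative stand-ins, not Han's model or the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimization with uniform swarm initialization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)]                       # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Hypothetical fit: recover (mu_max, K_I) of a saturating growth law
# mu = mu_max * I / (I + K_I) from synthetic "measurements".
I = np.array([20.0, 50.0, 100.0, 200.0, 400.0])      # light intensities
mu_obs = 0.8 * I / (I + 120.0)                       # synthetic observations
obj = lambda p: np.sum((p[0] * I / (I + p[1]) - mu_obs) ** 2)
best, err = pso(obj, bounds=[(0.0, 2.0), (1.0, 500.0)])
```

Running the same fit from several seeds, as the authors did with different swarm initializations, is a cheap check that the swarm converges to a unique minimum.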

  17. A new method for determination of parameters in sewer pollutant transformation process model.

    Science.gov (United States)

    Jiang, F; Leung, H W D; Li, S Y; Lin, G S; Chen, G H

    2007-11-01

    Understanding pollutant transformation in sewers is important in controlling odor emission from pressure mains as well as in assessing organic pollutant removal capacity of gravity sewers. Sewer process models have thus been developed to quantify the pollutant transformation processes under various sewer conditions. The quantification largely depends on model parameter values, in particular the kinetic and stoichiometric parameters related to microbial activities. The current approaches not only involve a large amount of experimental work but also may induce significant errors when microbial reactions cannot be differentiated effectively during the experiments. Therefore, this study is aimed at developing a new method that can reduce experimental work significantly. The proposed method utilizes a genetic algorithm (GA) to enable the determination with a single set of batch experiments. To study the feasibility of the proposed method, a set of 72-hr batch experiments was first conducted for determining the parameters of a sewer model developed in this study, which adopted a full version of the International Water Association (IWA) Activated Sludge Model No. 3 (ASM3) to describe the microbial activities in sewers. The results were then verified with two different sets of the batch experiments. Furthermore, dynamic variation data of dissolved oxygen level were collected at the outlet of a 1.5-km gravity sewer to validate the determined parameters. All the results showed that the proposed parameter determination method is effective.
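A genetic algorithm for parameter determination of the kind described can be sketched as follows. The toy objective (recovering a first-order decay from synthetic data) stands in for the ASM3-based sewer model, and all operators and constants are illustrative choices, not those of the study.

```python
import numpy as np

def genetic_algorithm(objective, bounds, pop_size=40, n_gen=150, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    best, best_val = None, np.inf
    for _ in range(n_gen):
        fit = np.array([objective(ind) for ind in pop])
        if fit.min() < best_val:
            best_val, best = fit.min(), pop[np.argmin(fit)].copy()
        # Tournament selection: keep the better of two random individuals
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Blend crossover between consecutive parents, then Gaussian mutation
        alpha = rng.random((pop_size, dim))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        children += rng.normal(0.0, 0.02 * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = best  # elitism: the best individual survives unchanged
    return best, best_val

# Synthetic batch data: substrate decay S(t) = S0 * exp(-k * t)
t = np.linspace(0.0, 10.0, 20)
S_obs = 100.0 * np.exp(-0.3 * t)
obj = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - S_obs) ** 2)
best, best_val = genetic_algorithm(obj, bounds=[(50.0, 150.0), (0.01, 1.0)])
```

In the study's setting, the objective would instead be the misfit between the batch-experiment measurements and the ASM3-based model simulation over the full parameter set.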

  18. Control of the SCOLE configuration using distributed parameter models

    Science.gov (United States)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of the dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria remain to be studied.

  19. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (set differences < 0.3 log units RMSE difference and < 0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-Scales (3) combined with an average Z-Scale value for each target, while ProtFP (PCA8), ST-Scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still, on average, surprisingly similar. Still, combining sets describing complementary information consistently leads to a small improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared thereby underlining that

  20. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
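The robustness to outliers that the algorithm exploits can be illustrated with a one-dimensional location fit: under a Gaussian likelihood the estimated centroid is dragged toward a gross outlier, whereas a heavy-tailed Student's-t likelihood is not. The data and degrees of freedom below are invented for demonstration.

```python
import numpy as np
from scipy.stats import norm, t as student_t

# Five inliers near zero plus one gross outlier
data = np.array([-0.3, 0.1, 0.2, -0.1, 0.05, 8.0])
mu_grid = np.linspace(-2.0, 6.0, 801)

# Negative log-likelihood of each candidate centroid under the two models
nll_gauss = np.array([-norm.logpdf(data, loc=mu).sum() for mu in mu_grid])
nll_t = np.array([-student_t.logpdf(data, df=3, loc=mu).sum() for mu in mu_grid])

mu_gauss = mu_grid[np.argmin(nll_gauss)]   # dragged toward the outlier
mu_t = mu_grid[np.argmin(nll_t)]           # stays near the inlier cluster
```

In the registration setting the same effect downweights noise and outlier points when fitting the mixture centroids to the data point set.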

  1. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the microwave power level process shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
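The ±20% perturbation scheme described above is a one-at-a-time sensitivity analysis. The sketch below uses a hypothetical algebraic drying-time expression in place of the paper's COMSOL simulation; the parameter names and values are illustrative only.

```python
def one_at_a_time_sensitivity(model, base_params, fraction=0.2):
    """Perturb each parameter by +/- fraction while holding the others at
    their base values, and report the relative change in the model output."""
    base_out = model(base_params)
    effects = {}
    for name, value in base_params.items():
        out = {}
        for sign in (+1, -1):
            p = dict(base_params)
            p[name] = value * (1 + sign * fraction)
            out[sign] = (model(p) - base_out) / base_out
        effects[name] = out
    return effects

# Toy drying-time model (illustrative, not the paper's physics): time rises
# with sample thickness L and falls with diffusivity alpha and the
# surface heat transfer coefficient h.
model = lambda p: p["L"] ** 2 / (p["alpha"] * (1 + 0.1 * p["h"]))
effects = one_at_a_time_sensitivity(model, {"L": 0.01, "alpha": 1e-7, "h": 25.0})
```

Parameters whose ±20% perturbation barely moves the output, like the transfer coefficients in the abstract, can then be fixed at nominal values during calibration.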

  2. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the microwave power level process shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.

  3. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of consumers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current database of parameters for the models of generating units, including the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  4. Application of acid whey and set milk to marinate beef with reference to quality parameters and product safety.

    Science.gov (United States)

    Wójciak, Karolina M; Krajmas, Paweł; Solska, Elżbieta; Dolatowski, Zbigniew J

    2015-01-01

    The aim of the study was to evaluate the potential of acid whey and set milk as marinades in the traditional production of fermented eye round. The studies involved assaying the pH value, water activity (aw), oxidation-reduction potential, TBARS value, and colour parameters in the CIE system (L*, a*, b*), as well as determining the counts of lactic acid bacteria and certain pathogenic bacteria after the ripening process and after 60 days of cold storage. Sensory analysis and analysis of the fatty acid profile were performed after completion of the ripening process. Analysis of the pH value of the products revealed that application of acid whey to marinate beef resulted in increased acidity of the ripening eye round (5.14). The highest value of the colour parameter a* after the ripening process and during storage was observed in sample AW (12.76 and 10.07, respectively), while the lowest was observed in sample SM (10.06 and 7.88, respectively). The content of polyunsaturated fatty acids (PUFA) was higher in eye round marinated in acid whey by approx. 4% in comparison to the other samples. Application of acid whey to marinate beef resulted in an increased share of red colour in the general colour tone as well as increased oxidative stability of the product during storage. It also increased the content of polyunsaturated fatty acids (PUFA) in the product. All model products had high counts of lactic acid bacteria, and no pathogenic bacteria such as L. monocytogenes, Y. enterocolitica, S. aureus, or Clostridium sp. were detected.

  5. Assessment of Lumped-Parameter Models for Rigid Footings

    DEFF Research Database (Denmark)

    Andersen, Lars

    2010-01-01

    The quality of consistent lumped-parameter models of rigid footings is examined. Emphasis is put on the maximum response during excitation and the geometrical damping related to free vibrations. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translations as well as torsion and rocking, and the necessity of coupling between horizontal sliding and rocking is discussed. Most of the analyses are carried out for hexagonal footings; but in order to generalise the conclusions to a broader variety of footings, comparisons are made with the response of circular and square foundations.

  6. MODIS/Terra+Aqua BRDF/Albedo Model Parameters 16-Day L3 Global 1km SIN Grid V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODerate-resolution Imaging Spectroradiometer (MODIS) BRDF/Albedo Model Parameters product (MCD43B1) contains three-dimensional (3D) data sets providing users...

  7. MODIS/Terra+Aqua BRDF/Albedo Model Parameters 16-Day L3 Global 500m SIN Grid V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODerate-resolution Imaging Spectroradiometer (MODIS) BRDF/Albedo Model Parameters product (MCD43A1) contains three-dimensional (3D) data sets providing users...

  8. MODIS/Terra+Aqua BRDF/Albedo Model Parameters Daily L3 Global 0.05Deg CMG V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODIS MCD43C1 Version 6 Bidirectional reflectance distribution function and Albedo (BRDF/Albedo) Model Parameters data set is a 5600 meter daily 16-day product....

  9. MODIS/Terra+Aqua BRDF/Albedo Model Parameters Daily L3 Global - 500m V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODIS MCD43A1 Version 6 Bidirectional reflectance distribution function and Albedo (BRDF/Albedo) Model Parameters data set is a 500 meter daily 16-day product....

  10. Ab initio localized basis set study of structural parameters and elastic properties of HfO{sub 2} polymorphs

    Energy Technology Data Exchange (ETDEWEB)

    Caravaca, M A [Facultad de Ingenieria, Universidad Nacional del Nordeste, Avenida Las Heras 727, 3500-Resistencia (Argentina); Casali, R A [Facultad de Ciencias Exactas y Naturales y Agrimensura, Universidad Nacional del Nordeste, Avenida Libertad, 5600-Corrientes (Argentina)

    2005-09-21

    The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of P 2{sub 1}/c, Pbca, Pnma, Fm3m, P4{sub 2}nmc and Pa3 phases of HfO{sub 2}. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (simple zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state of the art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found a non-linear behaviour in the lattice parameters with pressure in the P 2{sub 1}/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give pressure transitions of 3.3 and 10.8 GPa for P2{sub 1}/c {yields} Pbca and Pbca {yields} Pnma, respectively, in accordance with different high pressure experimental values.

  11. Ab initio localized basis set study of structural parameters and elastic properties of HfO2 polymorphs

    International Nuclear Information System (INIS)

    Caravaca, M A; Casali, R A

    2005-01-01

    The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of P 2 1 /c, Pbca, Pnma, Fm3m, P4 2 nmc and Pa3 phases of HfO 2 . Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (simple zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state of the art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found a non-linear behaviour in the lattice parameters with pressure in the P 2 1 /c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give pressure transitions of 3.3 and 10.8 GPa for P2 1 /c → Pbca and Pbca → Pnma, respectively, in accordance with different high pressure experimental values

  12. Joint state and parameter estimation for a class of cascade systems: Application to a hemodynamic model

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.

  13. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach to methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of the parameters was available, and an initial parameter estimation of the complete set

  14. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
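For a concrete instance of nonlinear parameter estimation in pesticide degradation, the parameters of a first-order decay law C(t) = C0·exp(-k·t) can be fitted by nonlinear least squares. The residue data below are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, C0, k):
    """First-order degradation kinetics C(t) = C0 * exp(-k * t)."""
    return C0 * np.exp(-k * t)

# Synthetic residue measurements (illustrative, not from the paper)
t = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # days after application
C = np.array([10.2, 8.4, 6.3, 3.9, 1.6, 0.25])    # residue concentration

popt, pcov = curve_fit(first_order, t, C, p0=[10.0, 0.1])
C0_hat, k_hat = popt
half_life = np.log(2) / k_hat            # DT50 implied by the fitted rate
perr = np.sqrt(np.diag(pcov))            # one-sigma parameter uncertainties
```

Unlike linearizing via log-transform, fitting the nonlinear model directly keeps the error structure of the raw measurements, which is the point the abstract makes against pure linear regression.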

  15. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. 
This report is concerned primarily with the

  16. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  17. The Early Eocene equable climate problem: can perturbations of climate model parameters identify possible solutions?

    Science.gov (United States)

    Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J

    2013-10-28

Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time an ensemble approach with perturbed climate-sensitive model parameters has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters, and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than in many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations, and how they differ from the remaining simulations, in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate well very different palaeoclimates and the present-day climate. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not need a strong Charney climate sensitivity for future climate change. It produces a Charney climate sensitivity of 2.7°C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26°C ± 0.69°C. Thus, this value is within the range and below the mean of the models included in the AR4.

  18. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance

    Science.gov (United States)

    Raymond, G. M.; Bassingthwaighte, J. B.

    2016-01-01

This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a “consilience” of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates). Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints, and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
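The contrast between models (1) and (2) above can be sketched numerically. The snippet below is a hedged illustration, not the authors' code: Vmax, Km, the first-order rate k and the starting concentration are illustrative assumptions chosen only to show why first-order and Michaelis-Menten clearance diverge at overdose-level concentrations, where the saturable M-M pathway clears drug much more slowly than a first-order extrapolation would predict.

```python
# Toy comparison of first-order vs Michaelis-Menten (M-M) drug elimination.
# All rate constants are illustrative, NOT the estimates from the study
# (the abstract reports Km in the 14.6-24 mg/L range).

def simulate(rate, c0=200.0, dt=0.01, t_end=24.0):
    """Euler-integrate dC/dt = -rate(C); return the concentration trace."""
    c, trace = c0, [c0]
    for _ in range(round(t_end / dt)):
        c = max(c + dt * (-rate(c)), 0.0)   # clip at zero concentration
        trace.append(c)
    return trace

Km, Vmax = 18.0, 30.0          # mg/L and mg/(L*h), illustrative values
k = Vmax / Km                  # first-order limit of M-M when C << Km

mm    = simulate(lambda c: Vmax * c / (Km + c))  # saturable clearance
first = simulate(lambda c: k * c)                # first-order clearance

# At an overdose-level start (200 mg/L >> Km) the M-M pathway saturates,
# so concentration falls near-linearly and stays above the first-order curve.
print(f"C(10 h): M-M {mm[1000]:.2f} mg/L vs first-order {first[1000]:.2e} mg/L")
```

With these assumed constants the two models agree at low concentrations but differ by orders of magnitude mid-decay, which is why, as the abstract notes, the fitted first-order rates had "no commonality" across the three concentration regimes.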

  19. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    Science.gov (United States)

    Raymond, G M; Bassingthwaighte, J B

This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates). Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints, and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model

  20. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  1. Identifiability and error minimization of receptor model parameters with PET

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs

  2. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...... PA for simulations. The simulated error vector magnitude (EVM) and adjacent channel power ratio (ACPR) were compared with the measured data to validate the model. The maximum differences between the simulated and measured EVM and ACPR are less than 2% point and 3 dB, respectively....

  3. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

Long-term mission identification and model validation for an in-flight manipulator control system in near-zero gravity and a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares (LS) procedure for obtaining the initial conditions and a general nonlinear optimization method, the MCS (multilevel coordinate search) algorithm, to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.

  4. Effect of software version and parameter settings on the marginal and internal adaptation of crowns fabricated with the CAD/CAM system.

    Science.gov (United States)

    Shim, Ji Suk; Lee, Jin Sook; Lee, Jeong Yol; Choi, Yeon Jo; Shin, Sang Wan; Ryu, Jae Jun

    2015-10-01

This study investigated the marginal and internal adaptation of individual dental crowns fabricated using a CAD/CAM system (Sirona's BlueCam), and also evaluated the effect of the software version used and of specific parameter settings on the adaptation of the crowns. Forty digital impressions of a previously prepared master model were acquired using an intraoral scanner and divided into four groups based on the software version and on the spacer settings used. Versions 3.8 and 4.2 of the software were used, and the spacer parameter was set at either 40 μm or 80 μm. The marginal and internal fit of the crowns was measured using the replica technique, which uses a low-viscosity silicone material that simulates the thickness of the cement layer. The data were analyzed using a Friedman two-way analysis of variance (ANOVA) and paired t-tests with the significance level set at p<0.05. The fit of the crowns was affected by the software version (p<0.05): the crowns designed with version 4.2 of the software showed a better fit than those designed with version 3.8, particularly in the axial wall and in the inner margin. The spacer parameter was more accurately represented in version 4.2 of the software than in version 3.8. In addition, the use of version 4.2 of the software combined with the spacer parameter set at 80 μm showed the least variation. On the other hand, the outer margin was not affected by the variables. Compared to version 3.8 of the software, version 4.2 can be recommended for the fabrication of well-fitting crown restorations and for the appropriate regulation of the spacer parameter.

  5. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.
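The one-at-a-time perturbation pattern behind a sensitivity analysis like the one above can be sketched with a toy stand-in. The `pump_flow()` function below is a hypothetical lumped surrogate, not the authors' lymphangion chain model, and all parameter names and values are illustrative assumptions; the resulting ranking therefore does not reproduce the study's finding that minimum valve resistance dominates.

```python
# One-at-a-time sensitivity sweep on a toy lymphangion-chain surrogate.
# pump_flow() is an invented stand-in: time-average flow through a chain of
# n pumping segments working against an adverse pressure difference dP.

def pump_flow(p, n=4, dP=100.0):
    """Toy model: contraction-generated drive minus the adverse pressure
    difference, divided by the total vessel + valve resistance."""
    drive = n * p["contraction_amp"] * p["frequency"] - dP
    resistance = n * p["vessel_res"] + (n + 1) * p["valve_res_min"]
    return max(drive, 0.0) / resistance

base = {"contraction_amp": 60.0, "frequency": 0.5,
        "vessel_res": 1.0, "valve_res_min": 0.2}   # illustrative values

q0 = pump_flow(base)
sensitivity = {}
for name in base:
    hi = dict(base, **{name: base[name] * 1.10})   # +10% perturbation
    sensitivity[name] = (pump_flow(hi) - q0) / q0  # relative output change

for name, s in sorted(sensitivity.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:16s} {s:+.3f}")
```

Ranking the relative output changes by magnitude is the simplest form of the analysis; the study itself explores a full physiological range for each parameter rather than a single ±10% step.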

  6. Links between the charge model and bonded parameter force constants in biomolecular force fields

    Science.gov (United States)

    Cerutti, David S.; Debiec, Karl T.; Case, David A.; Chong, Lillian T.

    2017-10-01

The ff15ipq protein force field is a fixed charge model built by automated tools based on the two charge sets of the implicitly polarized charge method: one set (appropriate for vacuum) for deriving bonded parameters and the other (appropriate for aqueous solution) for running simulations. The duality is intended to treat water-induced electronic polarization with an understanding that fitting data for bonded parameters will come from quantum mechanical calculations in the gas phase. In this study, we compare ff15ipq to two alternatives produced with the same fitting software and a further expanded data set but following more conventional methods for tailoring bonded parameters (harmonic angle terms and torsion potentials) to the charge model. First, ff15ipq-Qsolv derives bonded parameters in the context of the ff15ipq solution phase charge set. Second, ff15ipq-Vac takes ff15ipq's bonded parameters and runs simulations with the vacuum phase charge set used to derive those parameters. The IPolQ charge model and associated protocol for deriving bonded parameters are shown to be an incremental improvement over protocols that do not account for the material phases of each source of their fitting data. Both force fields incorporating the polarized charge set depict stable globular proteins and have varying degrees of success modeling the metastability of short (5-19 residues) peptides. In this particular case, ff15ipq-Qsolv increases stability in a number of α-helices, correctly obtaining 70% helical character in the K19 system at 275 K and showing appropriately diminishing content up to 325 K, but overestimating the helical fraction of AAQAA3 by 50% or more, forming long-lived α-helices in simulations of a β-hairpin, and increasing the likelihood that the disordered p53 N-terminal peptide will also form a helix. This may indicate a systematic bias imparted by the ff15ipq-Qsolv parameter development strategy, which has the hallmarks of strategies used to develop

  7. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity, which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.
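The value of a second observable can be shown with a toy illustration. The two response functions below are invented stand-ins, not MAGICC's equations; they only demonstrate the general point that a global-mean temperature observation alone leaves a whole curve of acceptable (sensitivity, ocean-uptake) pairs, while adding a land-ocean-difference observation shrinks that set.

```python
# Toy two-parameter "climate model": S is a climate-sensitivity-like
# parameter, d an ocean-heat-uptake-like factor. Both response functions
# are illustrative assumptions, not a real energy-balance formulation.

def t_global(S, d):
    return S * (1.0 - 0.3 * d)      # toy global-mean warming

def t_landocean(S, d):
    return 0.4 * S * d              # toy land-ocean temperature difference

S_true, d_true = 3.0, 0.5
obs_g, obs_lo = t_global(S_true, d_true), t_landocean(S_true, d_true)

tol = 0.06
only_global, both = 0, 0
for i in range(41):                  # S grid: 1.0 .. 5.0 in steps of 0.1
    for j in range(21):              # d grid: 0.0 .. 1.0 in steps of 0.05
        S, d = 1.0 + 0.1 * i, 0.05 * j
        if abs(t_global(S, d) - obs_g) < tol:
            only_global += 1
            if abs(t_landocean(S, d) - obs_lo) < tol:
                both += 1

print(f"grid cells consistent with T_global only: {only_global}, "
      f"with both observables: {both}")
```

Every cell consistent with both observables is also consistent with the global-mean observation alone, so the second constraint can only shrink the feasible set, which is the mechanism the study exploits with real data.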

  8. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki

    2013-06-21

Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. © The Author 2013.
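The decomposition idea, integrating each gene product's rate equation separately and reconstructing the coupled mean time evolution, can be sketched with a fixed-point iteration. This is a hedged sketch, not the authors' framework: the two-gene circuit, all rate constants, and the Picard-style iteration scheme are illustrative assumptions.

```python
# Decompose a coupled 2-gene system: integrate each ODE on its own with the
# other trajectory held fixed at the previous iterate, then repeat until the
# reconstructed mean trajectories stop changing. Circuit and constants are
# invented for illustration.

def integrate(rate, y0, other, dt):
    """Euler-integrate dy/dt = rate(y, other(t)) along one trajectory."""
    y, out = y0, [y0]
    for o in other[:-1]:
        y = y + dt * rate(y, o)
        out.append(y)
    return out

dt, n = 0.01, 1000
B = [0.0] * (n + 1)                 # initial guess for gene B's trajectory
history = []
for _ in range(15):
    # gene A: repressed by B;  gene B: activated by A (toy kinetics)
    A = integrate(lambda a, b: 1.0 / (1.0 + b) - 0.5 * a, 0.0, B, dt)
    B_new = integrate(lambda b, a: 0.8 * a / (1.0 + a) - 0.5 * b, 0.0, A, dt)
    delta = max(abs(x - y) for x, y in zip(B, B_new))
    history.append(delta)           # track convergence of the iteration
    B = B_new

print(f"final A={A[-1]:.3f}, B={B[-1]:.3f}, last update={history[-1]:.2e}")
```

Each inner integration touches only one equation at a time, which is what makes the scheme decomposable; the outer loop plays the role of the paper's iterative accuracy refinement.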

  9. Empirical validation data sets for double skin facade models

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate...... empirical data for validation of DSF modeling with building simulation software were produced within the International Energy Agency (IEA) SHCTask 34 / ECBCS Annex 43. This paper describes the full-scale outdoor experimental test facility, the experimental set-up and the measurements procedure...

  10. Hair growth-promotion effects of different alternating current parameter settings are mediated by the activation of Wnt/β-catenin and MAPK pathway.

    Science.gov (United States)

    Sohn, Ki Min; Jeong, Kwan Ho; Kim, Jung Eun; Park, Young Min; Kang, Hoon

    2015-12-01

Electrical stimulation is being used for a variety of therapeutic skin conditions. There have been clinical studies demonstrating the positive effect of electrical stimuli on hair regrowth. However, the exact underlying mechanism and the optimal parameter settings have not yet been clarified. To investigate the effects of different parameter settings of electrical stimuli on hair growth by examining changes in human dermal papilla cells (hDPCs) in vitro and by observing molecular changes in animal tissue. In vitro, cultured hDPCs were electrically stimulated with different parameter settings at alternating current (AC). Cell proliferation was measured by MTT assay. The Ki67 expression was measured by immunofluorescence. Hair growth-related gene expressions were measured by RT-PCR. In the animal model, different parameter settings of AC were applied to the shaved dorsal skin of rabbit for 8 weeks. Expression of hair-related genes in the skin of rabbit was examined by RT-PCR. At low voltage power (3.5 V) and low frequency (1 or 2 MHz) with AC, in vitro proliferation of hDPCs was successfully induced. A significant increase in Wnt/β-catenin, Ki67, p-ERK and p-AKT expressions was observed under the aforementioned settings. In the animal model, hair regrowth was observed in the entire stimulated areas under individual conditions. Expression of hair-related genes in the skin significantly increased on the 6th week of treatment. There are optimal conditions for electrically stimulated hair growth, and they might differ between cells, animals and human tissues. Electrical stimuli induce mechanisms such as the activation of Wnt/β-catenin and MAPK pathway in hair follicles. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  12. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  13. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss...

  14. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  15. Constraint on Parameters of Inverse Compton Scattering Model for ...

    Indian Academy of Sciences (India)

J. Astrophys. Astr. (2011) 32, 299–300 © Indian Academy of Sciences. Constraint on Parameters of Inverse Compton Scattering Model for PSR B2319+60. H. G. Wang* & M. Lv. Center for Astrophysics, Guangzhou University, Guangzhou, China. *e-mail: cosmic008@yahoo.com.cn. Abstract. Using the multifrequency radio ...

  16. Model calibration and parameter estimation for environmental and water resource systems

    CERN Document Server

    Sun, Ne-Zheng

    2015-01-01

    This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...

  17. Sensor set-up for wireless measurement of automotive rim and wheel parameters in laboratory conditions

    Science.gov (United States)

    Borecki, M.; Prus, P.; Korwin-Pawlowski, M. L.; Rychlik, A.; Kozubel, W.

    2017-08-01

Modern rims and wheels are tested at the design and production stages. Tests can be performed in laboratory conditions and on the road. In the laboratory, complex and costly equipment is used, for example wheel balancers and impact testers. Modern wheel balancers are equipped with electronic and electro-mechanical units that enable touch-less measurement of dimensions, including precision measurement of radial and lateral wheel run-out, automatic positioning and application of the counterweights, and vehicle wheel set monitoring - tread wear, drift angles and run-out unbalance. Those tests are performed by on-wheel axis measurements with laser distance meters. The impact tester enables dropping of weights from a defined height onto a wheel. Test criteria are the loss of pressure of the tire and the generation of cracks in the wheel without direct impact of the falling weights. In the present paper, a set-up composed of three accelerometers, a temperature sensor and a pressure sensor is examined as the basis of a wheel tester. The sensor set-up configuration, on-line diagnostics and signal transmission are discussed.

  18. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
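The aggregation step, "a weighted additive model was used to aggregate participants' consequence estimates," can be sketched as follows. The alternatives, objective weights, and 0-100 consequence scores below are invented placeholders, not the workshop's actual elicited values.

```python
# Weighted additive value model: each alternative's decision score is the
# weighted sum of its consequence estimates across objectives.
# All numbers are illustrative placeholders.

weights = {"ecological": 0.5, "social": 0.2, "economic": 0.3}

# consequence estimates on a common 0-100 value scale, per objective
alternatives = {
    "no action":       {"ecological": 20, "social": 90, "economic": 100},
    "signage":         {"ecological": 45, "social": 80, "economic": 85},
    "partial closure": {"ecological": 75, "social": 55, "economic": 60},
    "full closure":    {"ecological": 95, "social": 25, "economic": 30},
}

def decision_score(consequences, weights):
    """Weighted additive model: sum of weight * value over objectives."""
    return sum(weights[obj] * v for obj, v in consequences.items())

scores = {alt: decision_score(c, weights) for alt, c in alternatives.items()}
best = max(scores, key=scores.get)
for alt, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{alt:16s} {s:5.1f}")
```

In the study, participants supplied one set of consequence estimates per ecological scenario, so the aggregation above would be repeated per scenario, and the spread of scores across scenarios is what expresses the uncertainty handed to decision makers.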

  19. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when a single run of the landslide model is computationally expensive (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining basis set expansion, meta-modelling and Sobol' indices is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
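
    The workflow "basis set expansion - meta-model - Sobol' indices" can be sketched on a toy time-series model. The two-parameter "landslide" below is illustrative, not the La Frasse model; the surrogate is a quadratic least-squares fit rather than projection pursuit regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a long-running landslide model: a displacement time series
# driven by two hypothetical material parameters a and b.
t = np.linspace(0.0, 1.0, 100)
def model(a, b):
    return a * t + b * t**2

# 1. A small design of "expensive" runs.
N = 400
A = rng.uniform(0.5, 2.0, N)
B = rng.uniform(0.0, 1.0, N)
Y = np.array([model(a, b) for a, b in zip(A, B)])       # (N, 100)

# 2. Basis set expansion (PCA): dominant temporal modes of the output.
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
z1 = U[:, 0] * S[0]              # score of each run on the first mode

# 3. Cheap meta-model for the first-mode score (quadratic least squares).
X = np.column_stack([np.ones(N), A, B, A**2, B**2, A * B])
coef, *_ = np.linalg.lstsq(X, z1, rcond=None)

# 4. First-order Sobol' indices from the surrogate on a dense product grid
#    (inputs independent and uniform, so grid means approximate expectations).
ag = np.linspace(0.5, 2.0, 300)
bg = np.linspace(0.0, 1.0, 300)
F = (coef[0] + coef[1] * ag[:, None] + coef[2] * bg[None, :]
     + coef[3] * ag[:, None]**2 + coef[4] * bg[None, :]**2
     + coef[5] * ag[:, None] * bg[None, :])
S_a = F.mean(axis=1).var() / F.var()   # Var(E[Y|a]) / Var(Y)
S_b = F.mean(axis=0).var() / F.var()
```

Because the toy model is additive in a and b, the two first-order indices should sum to roughly one; for a real landslide model the residual quantifies interaction effects.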

  20. Integrating microbial diversity in soil carbon dynamic models parameters

    Science.gov (United States)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed during the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Many of them now consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the models' differential equations as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods over the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, explicitly integrating their role has become a key issue in the development of soil carbon models. Some interesting attempts can be found; they are dominated by the incorporation of several compartments for different groups of microbial biomass, defined in terms of functional traits and/or biogeochemical composition, to integrate microbial diversity. However, these are basically heuristic models, in the sense that they are used to test hypotheses through simulations; they have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity into a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land use histories.
The soils were then incubated for 104 days, and labelled and non-labelled CO2 fluxes were measured at ten
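
    As a hedged illustration of the kind of compartment model discussed above, here is a toy two-pool system in which decomposition is explicitly limited by microbial biomass. The pools, rate constants and Monod-type limitation are assumptions for the sketch, not the DIMIMOS model.

```python
# Illustrative two-pool model: substrate carbon C and microbial biomass B,
# with microbial biomass as an explicit limiting factor on decomposition.
k, Km, eff, d = 0.08, 0.5, 0.4, 0.02   # hypothetical rate constants (per day)

dt, days = 0.01, 104.0                 # 104-day incubation, Euler integration
C, B, co2 = 10.0, 0.5, 0.0             # mg C per g soil (illustrative)
for _ in range(int(days / dt)):
    dec = k * C * B / (Km + B)          # Monod-type limitation by biomass B
    co2 += ((1.0 - eff) * dec + d * B) * dt   # respired + dead-biomass C
    C += -dec * dt                      # substrate consumed
    B += (eff * dec - d * B) * dt       # growth minus mortality
```

Carbon is conserved by construction: at every step, what leaves C and B appears in the cumulative CO2 flux, which is the quantity measured in the incubation experiment.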

  1. "Economic microscope": The agent-based model set as an instrument in an economic system research

    Science.gov (United States)

    Berg, D. B.; Zvereva, O. M.; Akenov, Serik

    2017-07-01

    To create a valid model of a social or economic system, one must consider many parameters, conditions and restrictions. Such systems, and consequently the corresponding models, prove to be very complicated. The problem of engineering such system models cannot be solved by mathematical methods alone; a solution can be found in computer simulation. Simulation does not reject mathematical methods: mathematical expressions can become the foundation of a computer model. In this paper, a set of agent-based computer models is discussed. All models in the set simulate communications between productive agents, but each model is geared towards a specific goal and thus has its own algorithm and its own peculiarities. It is shown that computer simulation can discover features of agent behavior that cannot be obtained by analytical solution of mathematical equations, and thus plays the role of a kind of economic microscope.
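
    A minimal agent-based sketch of productive agents communicating; the ring topology and unit-transfer rule are illustrative assumptions, not the authors' models. Even this toy model produces an uneven distribution of funds that a mean-field equation (which predicts every agent keeps the average) would miss.

```python
import random

# Agents on a ring each hold funds; at every step a random agent transfers one
# unit to a random neighbour, if solvent. Total funds are conserved.
random.seed(1)

n_agents, steps = 50, 10000
funds = [10] * n_agents

for _ in range(steps):
    i = random.randrange(n_agents)
    j = (i + random.choice((-1, 1))) % n_agents   # communicate with a neighbour
    if funds[i] > 0:                               # pay only if solvent
        funds[i] -= 1
        funds[j] += 1
```

Inspecting `funds` after the run shows the emergent heterogeneity: some agents accumulate well above the initial endowment while others are driven to zero.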

  2. Numerical Parameter Optimization of the Ignition and Growth Model for HMX Based Plastic Bonded Explosives

    Science.gov (United States)

    Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence

    2017-06-01

    We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations, but calibrating the model is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock time of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS-724898.
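
    The calibration loop can be sketched with SciPy's differential evolution on a cheap stand-in forward model. The quadratic arrival-time model and gauge positions below are hypothetical; in the actual study the objective wraps ALE3D simulations.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in forward model: two parameters mapped to shock time of
# arrival at embedded gauge positions.
gauge_x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # gauge depths (illustrative)

def arrival_times(params, x=gauge_x):
    p1, p2 = params
    return p1 * x + p2 * x**2

# Synthetic "experimental" data from known true parameters plus noise.
rng = np.random.default_rng(3)
true = (0.50, 0.10)
observed = arrival_times(true) + rng.normal(0.0, 1e-3, gauge_x.size)

def misfit(params):
    # Objective minimized globally: squared time-of-arrival mismatch.
    return np.sum((arrival_times(params) - observed) ** 2)

result = differential_evolution(misfit, bounds=[(0.0, 2.0), (0.0, 1.0)], seed=7)
p1_fit, p2_fit = result.x
```

In the real workflow each `misfit` evaluation is an expensive hydrocode run, which is why a derivative-free global optimizer such as differential evolution is attractive.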

  3. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  4. A review of Environmental Impact Assessment parameters required for set up of a hydropower project

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Pankaj Kumar; Mazumdar, Asis [Jadavpur Univ. (India). School of Water Resources Engineering

    2013-07-01

    Environmental Impact Assessment (EIA) in general, hydro-meteorological conditions, topography, hydrology, water availability analysis of a river system, the importance of hydropower, and the feasibility of an EIA for the construction of a hydropower plant are discussed in this research work. Site selection is one of the major considerations for hydropower, and the minimum flow must be known so that the capacity of a hydropower plant can be predicted. The sustainable flow, i.e. the flow available throughout the year, has been calculated from the flow duration curve. This study highlights environmental impact assessment particularly as it relates to hydropower projects. The study area, a district town located in the eastern region of India on the banks of the river Kosi, has been considered. Historical rainfall and river discharge data have been collected from various organizations. The stage-discharge correlation and the hydrological parameters related to hydropower have been analyzed, and a review of environmental impact assessment in hydropower projects is discussed. The EIA analysis can also be carried out using fuzzy logic, wherein the EIA parameters are given different weightages based on survey reports carried out at different places and times. Such an analysis is also provided, based on the various data obtained.
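
    A sketch of the flow duration curve calculation mentioned above, using synthetic daily discharges; taking Q95 (the flow equalled or exceeded 95% of the time) as the "sustainable" flow is one common convention and an assumption of this sketch.

```python
import numpy as np

# Flow duration curve: rank daily discharges and compute the percentage of
# time each flow is equalled or exceeded (discharge data here are synthetic).
rng = np.random.default_rng(0)
q = rng.lognormal(mean=3.0, sigma=0.8, size=365)    # daily discharge, m^3/s

q_sorted = np.sort(q)[::-1]                          # descending
exceedance = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)  # Weibull plotting position

# Q95: low flow available ~all year, read off the curve by interpolation.
q95 = np.interp(95.0, exceedance, q_sorted)
```

The same curve also yields design flows for capacity prediction, e.g. Q50 or Q30, by interpolating at the corresponding exceedance percentage.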

  5. Parameter setting for peak fitting method in XPS analysis of nitrogen in sewage sludge

    Science.gov (United States)

    Tang, Z. J.; Fang, P.; Huang, J. H.; Zhong, P. Y.

    2017-12-01

    Thermal decomposition is regarded as an important route for treating the increasing amounts of sewage sludge, but the high nitrogen content causes serious nitrogen-related problems, so determining the forms and content of nitrogen in sewage sludge becomes essential. In this study, XPSpeak 4.1 was used to investigate the functional forms of nitrogen in sewage sludge; a peak fitting method was adopted and the best-optimized parameters were determined. According to the results, the N 1s spectrum can be resolved into five peaks: pyridine-N (398.7±0.4 eV), pyrrole-N (400.5±0.3 eV), protein-N (400.4 eV), ammonium-N (401.1±0.3 eV) and nitrogen oxide-N (403.5±0.5 eV). Based on experimental data obtained from elemental analysis and spectrophotometry, the optimum curve fitting parameters were determined: background type Tougaard, FWHM 1.2, 50% Lorentzian-Gaussian. XPS can thus be used as a practical tool to analyze the nitrogen functional groups of sewage sludge, reflecting the real content of nitrogen in its different forms.
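
    A hedged sketch of the fitting step (not XPSpeak itself): a 50% Lorentzian-Gaussian (pseudo-Voigt) line shape with the abstract's FWHM of 1.2 eV, fitted to a synthetic two-peak N 1s region with SciPy; peak positions, heights and noise are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM, ETA = 1.2, 0.5          # fixed width and 50% Lorentzian mixing (from the abstract)

def pseudo_voigt(e, center, height):
    # Linear combination of a Gaussian and a Lorentzian of equal FWHM.
    sigma = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((e - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((e - center) / (FWHM / 2.0)) ** 2)
    return height * (ETA * lorentz + (1.0 - ETA) * gauss)

def two_peaks(e, c1, h1, c2, h2):
    return pseudo_voigt(e, c1, h1) + pseudo_voigt(e, c2, h2)

# Synthetic background-subtracted N 1s region: pyridine-N and pyrrole-N peaks.
energy = np.linspace(396.0, 406.0, 400)
rng = np.random.default_rng(5)
spectrum = two_peaks(energy, 398.7, 100.0, 400.5, 60.0) + rng.normal(0.0, 1.0, energy.size)

popt, _ = curve_fit(two_peaks, energy, spectrum, p0=[398.0, 80.0, 401.0, 50.0])
```

The fitted centers and heights recover the synthetic peaks; in practice the areas under each component give the relative abundance of the nitrogen forms.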

  6. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is presented for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. The technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems. General conditions, derived analytically by means of the periodic version of the LaSalle invariance principle for differential equations, ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system and its receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against noise: when the experimental system is corrupted by bounded disturbance, the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
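
    The scheme can be illustrated on a toy scalar plant instead of the Ueda oscillator (the linear plant and the gains are assumptions for this sketch): a linear feedback coupling keeps the receiver synchronized to the measured state while an adaptive law drives the parameter estimate toward the true value.

```python
import math

# Plant: dx/dt = a_true*x + cos(t), with a_true unknown to the receiver.
a_true = -2.0          # parameter to be recovered
k, gamma = 1.0, 10.0   # coupling gain and adaptation gain (illustrative)
dt, T = 1e-3, 100.0

x, x_hat, a_hat = 1.0, 0.0, 0.0
t = 0.0
while t < T:
    u = math.cos(t)                      # nonautonomous (periodic) driving
    e = x - x_hat                        # synchronization error
    dx = a_true * x + u
    dx_hat = a_hat * x + u + k * e       # receiver with linear feedback coupling
    da_hat = gamma * e * x               # adaptive parameter-tracking law
    x += dx * dt
    x_hat += dx_hat * dt
    a_hat += da_hat * dt
    t += dt
```

With this equation-error form, a Lyapunov function V = e²/2 + (a_true − a_hat)²/(2γ) gives dV/dt = −k e², and the periodic driving provides the persistent excitation needed for the estimate to converge.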

  7. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  8. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  9. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    Science.gov (United States)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the great earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed to monitor seismic activity, mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun a seismic instrumentation program, or whose program is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not registered properly in some cities, such as Puebla and Oaxaca, that were damaged during those earthquakes. Fortunately, good maintenance of the seismic network has permitted the recording of an important number of small events in those cities. In this research we present a methodology based on neural networks to estimate significant duration and, in some cases, the response spectra for such seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization; for response spectra, a vector of spectral accelerations was used. For training the model we selected a set of accelerograms recorded from small events by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimates of the target parameters and have good predictive capacity for strong ground motion duration and response spectra.

  10. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth in mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the addition and removal of base stations from the active set is determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The observed parameters characterizing the performance of the specified mobile system are the drop call rate, the radio link degradation rate and the average size of the Active Set (AS). The simulation results show that increasing the height of the Base Station (BS) and Mobile Station (MS) antennas improves the received signal power level, thereby improving radio link quality, increasing the average size of the active set and reducing the average drop call rate. It was also found that Hata's propagation model contributed significantly more to improvements in the system performance parameters than Okumura's and Lee's propagation models.
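
    For reference, the Okumura-Hata urban path-loss formula (valid roughly 150-1500 MHz) shows why raising the BS antenna improves the received signal level; the frequency, heights and distance below are illustrative, not the paper's simulation settings.

```python
import math

# Okumura-Hata median path loss for urban areas, small/medium-city
# mobile-antenna correction factor a(h_ms).
def hata_urban_loss(f_mhz, h_bs, h_ms, d_km):
    """Median path loss in dB; f in MHz, antenna heights in m, distance in km."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ms - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_bs)
            - a_hm + (44.9 - 6.55 * math.log10(h_bs)) * math.log10(d_km))

# Doubling the BS antenna height lowers the path loss, i.e. raises the
# received power for the same transmit power.
loss_low  = hata_urban_loss(900.0, h_bs=30.0, h_ms=1.5, d_km=2.0)
loss_high = hata_urban_loss(900.0, h_bs=60.0, h_ms=1.5, d_km=2.0)
```

Both the −13.82·log10(h_bs) term and the distance-slope term (44.9 − 6.55·log10(h_bs)) decrease with BS antenna height, which is the mechanism behind the improved active-set statistics reported above.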

  11. Fate modelling of chemical compounds with incomplete data sets

    DEFF Research Database (Denmark)

    Birkved, Morten; Heijungs, Reinout

    2011-01-01

    Impact assessment of chemical compounds in Life Cycle Impact Assessment (LCIA) and Environmental Risk Assessment (ERA) requires a vast amount of data on the properties of the chemical compounds being assessed. These data are used in multi-media fate and exposure models to calculate risk levels in an approximate way. The idea is that not all data needed in a multi-media fate and exposure model are completely independent and equally important, but that there are physical-chemical and biological relationships between sets of chemical properties. A statistical model is constructed to underpin this assumption, and to provide simplified proxies for the more complicated "real" model relationships. In the presented study, two approaches for reducing the data demand associated with characterization of chemical emissions in USEtoxTM are tested: the first approach yields a simplified set of mode-of-entry-specific meta-models.

  12. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in its load transmission path, system-component relationship, system functioning manner, and time-dependent system configuration. The present paper first defines the time-domain series system, for which the traditional series system reliability model is inadequate. System-specific reliability modeling techniques are then proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material a priori/a posteriori strength expression, time-dependent and system-specific load-strength interference analysis, and the treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth-root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated through several numerical examples. - Highlights: • A new type of series system, the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical-analysis-based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
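
    The role of statistically dependent failure events can be illustrated with a small Monte Carlo sketch (normal strengths and a constant load are assumptions; this is not the paper's full model): because each tooth's strength is sampled once, repeated engagements of the same tooth are dependent trials, and the naive independent-engagement product badly underestimates reliability.

```python
import numpy as np

rng = np.random.default_rng(2)
n_teeth, n_rev, trials = 20, 100, 20000
load = 80.0                                   # constant transmitted load (arbitrary units)

# Each tooth's strength is a single random draw, reused at every engagement.
strengths = rng.normal(100.0, 10.0, size=(trials, n_teeth))

# Time-domain (dependent) view: the gear set survives n_rev revolutions iff
# every tooth's fixed strength exceeds the load; repeats add no new risk.
r_system = float(np.mean(np.all(strengths > load, axis=1)))

# Naive view treating every one of the n_teeth * n_rev engagements as an
# independent trial with the single-engagement survival probability.
p_single = float(np.mean(strengths > load))
r_naive = p_single ** (n_teeth * n_rev)
```

The dependent estimate stays near 0.63 for these numbers, while the independence assumption collapses to essentially zero, which is the kind of discrepancy the system-specific models above are built to avoid.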

  13. Modeling phosphorus in the Lake Allatoona watershed using SWAT: I. Developing phosphorus parameter values.

    Science.gov (United States)

    Radcliffe, D E; Lin, Z; Risse, L M; Romeis, J J; Jackson, C R

    2009-01-01

    Lake Allatoona is a large reservoir north of Atlanta, GA, that drains an area of about 2870 km² scheduled for a phosphorus (P) total maximum daily load (TMDL). The Soil and Water Assessment Tool (SWAT) model has been widely used for watershed-scale modeling of P, but there is little guidance on how to estimate P-related parameters, especially those related to in-stream P processes. In this paper, methods are demonstrated to estimate SWAT soil-related P parameters individually and to estimate stream-process P parameters collectively. Stream-related parameters were obtained using the nutrient uptake length concept. In a manner similar to experiments conducted by stream ecologists, a small point source is simulated in a headwater sub-basin of the SWAT models, and the in-stream parameter values are then adjusted collectively until the P uptake length is similar to values measured in streams in the region. After adjusting the in-stream parameters, the P uptake length estimated in the simulations ranged from 53 to 149 km, compared with uptake lengths measured by ecologists in the region of 11 to 85 km. Once the a priori P-related parameter set was developed, the SWAT models of the main tributaries to Lake Allatoona were calibrated for daily transport. Models using SWAT P parameters derived from the methods in this paper outperformed models using default parameter values when predicting total P (TP) concentrations in streams during storm events and annual TP loads to Lake Allatoona.
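
    The uptake length concept can be sketched as follows: downstream of a small point source, P concentration declines approximately exponentially, C(x) = C0 exp(−x/Sw), so Sw is recovered as minus the inverse slope of ln C versus distance. The longitudinal profile below is synthetic, standing in for the simulated SWAT output.

```python
import numpy as np

rng = np.random.default_rng(4)
x_km = np.linspace(0.0, 40.0, 15)       # distance downstream of the point source
s_w_true = 60.0                          # "true" uptake length (km), illustrative
conc = 2.0 * np.exp(-x_km / s_w_true) * rng.lognormal(0.0, 0.02, x_km.size)

# Log-linear regression: ln C = ln C0 - x / S_w.
slope, intercept = np.polyfit(x_km, np.log(conc), 1)
s_w_est = -1.0 / slope
```

In the calibration procedure described above, the in-stream parameters are adjusted until the Sw estimated this way from the simulated profile falls within the range measured by stream ecologists.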

  14. Characterizations of identified sets delivered by structural econometric models

    OpenAIRE

    Chesher, Andrew; Rosen, Adam M.

    2016-01-01

    This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous and/or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models generalize t...

  15. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences associated with the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of the RADTRAN stop parameters for truck stops was performed. The resulting data were analyzed to provide mean values, standard deviations, and histograms. The mean values can be used when an analyst has no other basis for selecting input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques that measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops.

  16. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and are obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total electronic energies are obtained by solving the radial Schrödinger equation with the new form of potential function using Numerov's numerical method. The results show that the new form of potential function is suitable for high-, medium- and low-Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and accuracy of the present potential function. (atomic and molecular physics)
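
    A short sketch of Numerov's method, the integrator named above. It is validated here on the test equation y'' + y = 0, whose exact solution sin x is known; the atomic problem replaces f(x) = 1 with the effective radial potential term.

```python
import math

# Numerov's method for y'' + f(x) y = 0: sixth-order local accuracy using
# only three grid points per step.
def numerov(f, x0, h, n, y0, y1):
    ys = [y0, y1]
    g = lambda x: (h * h / 12.0) * f(x)
    for i in range(1, n - 1):
        x_prev, x_cur, x_next = x0 + (i - 1) * h, x0 + i * h, x0 + (i + 1) * h
        y_next = (2.0 * ys[i] * (1.0 - 5.0 * g(x_cur))
                  - ys[i - 1] * (1.0 + g(x_prev))) / (1.0 + g(x_next))
        ys.append(y_next)
    return ys

# Check against y = sin(x) on [0, 10] with f(x) = 1.
h, n = 0.01, 1001
ys = numerov(lambda x: 1.0, 0.0, h, n, 0.0, math.sin(h))
err = max(abs(y - math.sin(i * h)) for i, y in enumerate(ys))
```

For bound-state calculations the same recurrence is integrated outward and inward and the energy is tuned (shooting) until the logarithmic derivatives match; only f(x) changes.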

  17. Systematic search for wide periodic windows and bounds for the set of regular parameters for the quadratic map.

    Science.gov (United States)

    Galias, Zbigniew

    2017-05-01

    An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
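
    A coarse numerical companion to this result: estimate the measure of the set of regular parameters as the fraction of a parameter grid whose finite-time Lyapunov exponent is negative. The grid size and iteration counts are pragmatic assumptions; this is far less precise than the rigorous interval-arithmetic bounds of the paper.

```python
import numpy as np

# Logistic map f(x) = a x (1 - x); lambda(a) = <ln |f'(x)|> along the orbit.
a = np.linspace(3.0, 4.0, 2001)
x = np.full_like(a, 0.3)

for _ in range(500):                     # discard transient
    x = a * x * (1.0 - x)

lyap = np.zeros_like(a)
n_avg = 3000
for _ in range(n_avg):
    x = a * x * (1.0 - x)
    lyap += np.log(np.abs(a * (1.0 - 2.0 * x)) + 1e-16)   # |f'(x)|, guarded at 0
lyap /= n_avg

regular_fraction = float(np.mean(lyap < 0.0))   # crude estimate of the measure
```

The crude estimate lands near 0.6, consistent with the rigorous lower bound 0.613960137 and the conjectured value close to 0.6139603; narrow windows below the grid resolution are what the paper's systematic search accounts for.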

  18. Improving the transferability of hydrological model parameters under changing conditions

    Science.gov (United States)

    Huang, Yingchun; Bárdossy, András

    2014-05-01

    Hydrological models are widely utilized to describe catchment behaviors using observed hydro-meteorological data. Hydrological processes may be considered non-stationary under changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on the stationarity assumption are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments under non-stationary conditions, we investigate the idea of simultaneous calibration on streamflow records from periods with dissimilar climate characteristics. In addition, a weather-based weighting function is implemented to adjust the calibration period toward future trends. For regions with limited data and for ungauged basins, common calibration was applied using information from similar catchments. Results show that model performance and parameter transferability can be substantially improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.
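
    The simultaneous-calibration idea can be sketched as a weighted objective over climate-dissimilar periods; the Nash-Sutcliffe efficiency metric and the toy wet/dry data below are illustrative assumptions, not the study's catchments.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(sim_by_period, obs_by_period, weights):
    """Weighted NSE across calibration periods; a single parameter set must
    score well in all periods at once (to be maximized by the calibrator)."""
    w = np.asarray(weights, float) / np.sum(weights)
    return float(sum(wi * nse(s, o)
                     for wi, s, o in zip(w, sim_by_period, obs_by_period)))

# Toy example: two periods (wet, dry); weights tilted toward the drier period,
# e.g. to reflect an expected drying trend.
obs_wet, obs_dry = [5.0, 7.0, 6.0, 8.0], [1.0, 2.0, 1.5, 2.5]
sim_wet, sim_dry = [5.2, 6.8, 6.1, 7.9], [1.1, 1.9, 1.6, 2.4]
score = combined_objective([sim_wet, sim_dry], [obs_wet, obs_dry], weights=[0.3, 0.7])
```

Calibrating against this combined score penalizes parameter sets that fit only one climate regime, which is the mechanism behind the improved transferability reported above.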

  19. A Uniform Set of DAV Atmospheric Parameters to Enable Differential Seismology

    Science.gov (United States)

    Fuchs, Joshua T.; Dunlap, Bart H.; Clemens, J. Christopher; Meza, Jesus; Dennihy, Erik

    2017-01-01

    We have observed over 130 hydrogen-atmosphere pulsating white dwarfs (DAVs) using the Goodman Spectrograph on the SOAR Telescope. This includes all known DAVs south of +10° declination as well as those observed by the K2 mission. Because it employs a single instrument, our sample allows us to carefully explore systematics in the determination of atmospheric parameters, Teff and log(g). While some systematics show changes of up to 300 K in Teff and 0.06 in log(g), the relative position of each star in the Teff-log(g) plane is more secure. These relative positions, combined with differences in pulsation spectra, will allow us to investigate relative differences in the structure and composition of over 130 DAVs through differential seismology.

  20. A simplified method to assess structurally identifiable parameters in Monod-based activated sludge models.

    Science.gov (United States)

    Petersen, Britta; Gernaey, Krist; Devisscher, Martijn; Dochain, Denis; Vanrolleghem, Peter A

    2003-07-01

    The first step in the estimation of parameters of models applied for data interpretation should always be an investigation of the identifiability of the model parameters. In this study the structural identifiability of the model parameters of Monod-based activated sludge models (ASM) was studied. In an illustrative example it was assumed that respirometric (dissolved oxygen or oxygen uptake rates) and titrimetric (cumulative proton production) measurements were available for the characterisation of nitrification. Two model structures, including the presence and absence of significant growth for description of long- and short-term experiments, respectively, were considered. The structural identifiability was studied via the series expansion methods. It was proven that the autotrophic yield becomes uniquely identifiable when combined respirometric and titrimetric data are assumed for the characterisation of nitrification. The most remarkable result of the study was, however, that the identifiability results could be generalised by applying a set of ASM1 matrix based generalisation rules. It appeared that the identifiable parameter combinations could be predicted directly based on the knowledge of the process model under study (in ASM1-like matrix representation), the measured variables and the biodegradable substrate considered. This generalisation reduces the time-consuming task of deriving the structurally identifiable model parameters significantly and helps the user to obtain these directly without the necessity to go too deeply into the mathematical background of structural identifiability.
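
    Structural non-identifiability of individual parameters can be demonstrated numerically with a toy no-growth Monod respirogram (parameter values are illustrative, and this is a simplification of the ASM setting): the maximum growth rate and the biomass concentration enter the output only through their product, so distinct parameter sets produce identical respirometric data.

```python
import numpy as np

def our_curve(mu_max, X, Y=0.67, Ks=2.0, S0=20.0, dt=0.01, t_end=6.0):
    """Exogenous oxygen uptake rate for a batch experiment, no-growth Monod
    kinetics, Euler integration (toy model, not ASM1 itself)."""
    S, out = S0, []
    for _ in range(int(t_end / dt)):
        uptake = (mu_max / Y) * X * S / (Ks + S)   # substrate consumption rate
        out.append((1.0 - Y) * uptake)             # oxygen respired
        S = max(S - uptake * dt, 0.0)
    return np.array(out)

# Same product mu_max * X -> identical OUR curves: only the combination is
# structurally identifiable from respirometric data alone.
our_a = our_curve(mu_max=4.0, X=1.0)
our_b = our_curve(mu_max=2.0, X=2.0)
```

This is exactly the kind of conclusion the matrix-based generalisation rules deliver without simulation: the identifiable quantities are combinations of parameters, read off from the model structure and the measured variables.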

  1. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower-variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
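
    As a minimal sketch of the inverse problem this record describes: for noiseless monoexponential data, a closed-form log-linear least-squares fit already maps a sampled curve to its parameters, and a network trained on many simulated (curve, parameter) pairs learns this same curve-to-parameter map. The ground-truth values below are purely illustrative.

```python
import math

def monoexp(a, k, times):
    """Sample y(t) = a * exp(-k * t) at the given time points."""
    return [a * math.exp(-k * t) for t in times]

def fit_loglinear(times, ys):
    """Closed-form log-linear least squares on ln(y) = ln(a) - k*t.
    This is the kind of fast curve-to-parameter map a trained network
    replaces when the data are noisy and the fit is nonlinear."""
    n = len(times)
    logs = [math.log(y) for y in ys]
    st, sl = sum(times), sum(logs)
    stt = sum(t * t for t in times)
    stl = sum(t * l for t, l in zip(times, logs))
    slope = (n * stl - st * sl) / (n * stt - st * st)
    intercept = (sl - slope * st) / n
    return math.exp(intercept), -slope  # estimates of (a, k)

# Hypothetical ground truth a = 2.0, k = 0.7, sampled at 10 time points.
times = [0.5 * i for i in range(10)]
a_hat, k_hat = fit_loglinear(times, monoexp(2.0, 0.7, times))
```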

  2. Assessing composition and structure of soft biphasic media from Kelvin-Voigt fractional derivative model parameters.

    Science.gov (United States)

    Zhang, Hong Mei; Wang, Yue; Fatemi, Mostafa; Insana, Michael F

    2017-03-01

    Kelvin-Voigt fractional derivative (KVFD) model parameters have been used to describe viscoelastic properties of soft tissues. However, translating model parameters into a concise set of intrinsic mechanical properties related to tissue composition and structure remains challenging. This paper begins by exploring these relationships using biphasic emulsion materials with known composition. Mechanical properties are measured by analyzing data from two indentation techniques - ramp-stress relaxation and load-unload hysteresis tests. Material composition is predictably correlated with viscoelastic model parameters. Model parameters estimated from the tests reveal that the elastic modulus E0 closely approximates the shear modulus for pure gelatin. The fractional-order parameter α and time constant τ vary monotonically with the volume fraction of the material's fluid component: α characterizes medium fluidity and the rate of energy dissipation, and τ is a viscous time constant. Numerical simulations suggest that the viscous coefficient η is proportional to the energy lost during quasi-static force-displacement cycles, EA. The slope of EA versus η is determined by α and the applied indentation ramp time Tr. Experimental measurements from phantom and ex vivo liver data show close agreement with theoretical predictions of the η - EA relation; the relative error is less than 20% for emulsions and 22% for liver. We find that KVFD model parameters form a concise feature space for biphasic medium characterization that describes time-varying mechanical properties.
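
    The relaxation behaviour behind these parameters can be sketched directly. Under an ideal step strain, the KVFD relaxation modulus is G(t) = E0 + η·t^(−α)/Γ(1−α); the snippet below evaluates it with hypothetical parameter values (E0, η and α are illustrative, not the paper's estimates).

```python
import math

def kvfd_relaxation(t, E0, eta, alpha):
    """Relaxation modulus of the Kelvin-Voigt fractional derivative model
    under an ideal step strain: G(t) = E0 + eta * t**(-alpha) / Gamma(1-alpha).
    E0 is the elastic floor, alpha in (0, 1) the fluidity, eta the viscous
    coefficient."""
    return E0 + eta * t ** (-alpha) / math.gamma(1.0 - alpha)

# Illustrative (hypothetical) parameters: the modulus decays toward E0.
G1 = kvfd_relaxation(1.0, E0=2.0, eta=0.5, alpha=0.3)
G10 = kvfd_relaxation(10.0, E0=2.0, eta=0.5, alpha=0.3)
```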

  3. The influence of phylodynamic model specifications on parameter estimates of the Zika virus epidemic.

    Science.gov (United States)

    Boskova, Veronika; Stadler, Tanja; Magnus, Carsten

    2018-01-01

    Each new virus introduced into the human population could potentially spread and cause a worldwide epidemic. Thus, early quantification of epidemic spread is crucial. Real-time sequencing followed by Bayesian phylodynamic analysis has proven to be extremely informative in this respect. Bayesian phylodynamic analyses require a model to be chosen and prior distributions on model parameters to be specified. We study here how choices regarding the tree prior influence quantification of epidemic spread in an emerging epidemic by focusing on estimates of the parameters clock rate, tree height, and reproductive number in the currently ongoing Zika virus epidemic in the Americas. While parameter estimates are quite robust to reasonable variations in the model settings when studying the complete data set, it is impossible to obtain unequivocal estimates when reducing the data to local Zika epidemics in Brazil and Florida, USA. Beyond the empirical insights, this study highlights the conceptual differences between the so-called birth-death and coalescent tree priors: while sequence sampling times alone can strongly inform the tree height and reproductive number under a birth-death model, the coalescent tree height prior is typically only slightly influenced by this information. Such conceptual differences, together with non-trivial interactions of different priors, complicate proper interpretation of empirical results. Overall, our findings indicate that phylodynamic analyses of early viral spread data must be carried out with care, as data sets may not necessarily be informative enough yet to provide estimates robust to prior settings. It is necessary to check the robustness of these data sets by scanning several models and prior distributions. Only if the posterior distributions are robust to reasonable changes of the prior distribution can the parameter estimates be trusted. Such robustness tests will help make real-time phylodynamic analyses of spreading epidemics more

  4. Gas ultracentrifuge separative parameters modeling using hybrid neural networks

    International Nuclear Information System (INIS)

    Crus, Maria Ursulina de Lima

    2005-01-01

    A hybrid neural network is developed for the calculation of the separative performance of an ultracentrifuge. A feed-forward neural network is trained to estimate the internal flow parameters of a gas ultracentrifuge, and these parameters are then applied in the diffusion equation. For this study, a set of 573 experimental data points is used to establish the relation between the separative performance and the controlled variables. The process control variables considered are the feed flow rate F, the cut θ and the product pressure Pp. The mechanical arrangements consider the radial waste scoop dimension, the rotating baffle size Ds and the axial feed location ZE. The methodology was validated through comparison of the calculated separative performance with experimental values. It may be applied to other processes by adapting the phenomenological procedures. (author)

  5. A fuzzy set preference model for market share analysis

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
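
    A minimal sketch of the fuzzy-set idea (the membership functions, linguistic terms and weights below are hypothetical, not the authors' model): linguistic terms become fuzzy sets over an ordinal rating scale, and an individual-level preference is a weighted aggregation of membership degrees.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over an ordinal 0-10 rating scale.
TERMS = {"low": (0.0, 2.0, 5.0), "medium": (2.0, 5.0, 8.0), "high": (5.0, 8.0, 10.0)}

def preference(ratings, weights, term):
    """Individual-level overall preference: weighted mean of the degree to
    which each attribute rating matches the preferred linguistic term."""
    total = sum(w * tri(r, *TERMS[term]) for r, w in zip(ratings, weights))
    return total / sum(weights)

# One consumer, two attributes rated 7.5 and 8.2, importance weights 0.6/0.4.
score = preference([7.5, 8.2], [0.6, 0.4], "high")
```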

  6. Robust linear parameter varying induction motor control with polytopic models

    Directory of Open Access Journals (Sweden)

    Dalila Khamari

    2013-01-01

    Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To do so, a linear matrix inequality (LMI)-based approach and a robust Lyapunov feedback controller are combined. This new approach takes rotor resistance and mechanical speed into account as varying parameters in the synthesis of the LPV feedback controller for the inner loop. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above regulator. The induction motor is described as a polytopic model because of its affine dependence on speed and rotor resistance, whose values can be estimated online during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, where robust stability and high performance are achieved over the entire operating range of the induction motor.
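
    The polytopic structure can be sketched in a few lines (the vertex matrices below are hypothetical, not the paper's induction-motor matrices): the varying-parameter system matrix is a convex combination of fixed vertex matrices, with weights derived from the online-estimated parameters.

```python
def polytopic_matrix(vertices, weights):
    """A(theta) as a convex combination of vertex matrices A_i:
    A(theta) = sum_i w_i * A_i with w_i >= 0 and sum w_i = 1."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0.0 for w in weights)
    rows, cols = len(vertices[0]), len(vertices[0][0])
    return [[sum(w * V[r][c] for w, V in zip(weights, vertices))
             for c in range(cols)] for r in range(rows)]

# Two hypothetical vertices for the (rotor resistance, speed) extremes.
A1 = [[0.0, 1.0], [-2.0, -1.0]]
A2 = [[0.0, 1.0], [-4.0, -3.0]]
A = polytopic_matrix([A1, A2], [0.25, 0.75])  # interpolated system matrix
```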

  7. Anatomical parameters for musculoskeletal modeling of the hand and wrist

    NARCIS (Netherlands)

    Mirakhorlo, M. (Mojtaba); Visser, Judith M A; Goislard de Monsabert, B. A A X; van der Helm, F.C.T.; Maas, H.; Veeger, H. E J

    2016-01-01

    A musculoskeletal model of the hand and wrist can provide valuable biomechanical and neurophysiological insights, relevant for clinicians and ergonomists. Currently, no consistent data-set exists comprising the full anatomy of these upper extremity parts. The aim of this study was to collect a

  8. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
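
    The hybrid idea - a population whose members move toward better solutions while random perturbations keep exploration alive - can be caricatured in a few lines. This is only an illustrative sketch on a toy objective, not the authors' Swarm-based Chemical Reaction Optimization; the move rule and constants are hypothetical.

```python
import random

def sphere(x):
    """Toy objective standing in for a model-vs-data error function."""
    return sum(xi * xi for xi in x)

def hybrid_search(f, dim=2, pop=12, iters=150, seed=1):
    """Toy hybrid swarm search: every member moves halfway toward the current
    best (a firefly-style attraction step) plus a small Gaussian perturbation
    (standing in for the stochastic 'reaction' moves). Illustrative only."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        xs.sort(key=f)                      # best solution first
        best = xs[0]
        for i in range(1, pop):             # keep the best, move the rest
            xs[i] = [xi + 0.5 * (bi - xi) + rng.gauss(0.0, 0.05)
                     for xi, bi in zip(xs[i], best)]
    return min(xs, key=f)

best = hybrid_search(sphere)
```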

  9. An improved swarm optimization for parameter estimation and biological model selection.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete

  10. Biosphere modelling for a HLW repository - scenario and parameter variations

    International Nuclear Information System (INIS)

    Grogan, H.

    1985-03-01

    In Switzerland high-level radioactive wastes have been considered for disposal in deep-lying crystalline formations. The individual doses to man resulting from radionuclides entering the biosphere via groundwater transport are calculated. The main recipient area modelled, which constitutes the base case, is a broad gravel terrace sited along the south bank of the river Rhine. An alternative recipient region, a small valley with a well, is also modelled. A number of parameter variations are performed in order to ascertain their impact on the doses. Finally, two scenario changes are modelled somewhat simplistically; these consider different prevailing climates, namely tundra and a climate warmer than the present. In the base case, the calculated long-term doses to man resulting from the existence of a HLW repository are negligibly low. Cs-135 results in the largest dose (8.4E-7 mrem/y at 6.1E+6 y) while Np-237 gives the largest dose from the actinides (3.6E-8 mrem/y). The response of the model to parameter variations cannot be easily predicted due to non-linear coupling of many of the parameters. However, the calculated doses were negligibly low in all cases, as were those resulting from the two scenario variations. (author)

  11. Thermal Model Parameter Identification of a Lithium Battery

    Directory of Open Access Journals (Sweden)

    Dirk Nissing

    2017-01-01

    Full Text Available The temperature of a Lithium battery cell is important for its performance, efficiency, safety, and capacity and is influenced by the environmental temperature and by the charging and discharging process itself. Battery Management Systems (BMS) take this effect into account. As the temperature at the battery cell is difficult to measure, the temperature is often measured on or near the poles of the cell, although the accuracy of predicting the cell temperature from those quantities is limited. Therefore a thermal model of the battery is used in order to calculate and estimate the cell temperature. This paper uses a simple RC-network representation for the thermal model and shows how the thermal parameters are identified using input/output measurements only, where the load current of the battery represents the input and the temperatures at the poles represent the outputs of the measurement. With a single measurement, the eight model parameters (thermal resistances, electric contact resistances, and heat capacities) can be determined using the method of least squares. Experimental results show that the simple model with the identified parameters fits the measurements very accurately.
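
    The identification idea can be sketched for a first-order case (a reduced stand-in for the paper's eight-parameter RC network; the discrete model, input profile and parameter values below are hypothetical). With T[k+1] = a·T[k] + b·P[k], the least-squares estimates of a and b follow from the 2×2 normal equations.

```python
def identify_first_order(T, P):
    """Least-squares fit of T[k+1] = a*T[k] + b*P[k] (a discrete-time
    first-order thermal RC model) via the 2x2 normal equations."""
    s_tt = sum(t * t for t in T[:-1])
    s_pp = sum(p * p for p in P[:-1])
    s_tp = sum(t * p for t, p in zip(T[:-1], P[:-1]))
    s_ty = sum(t * y for t, y in zip(T[:-1], T[1:]))
    s_py = sum(p * y for p, y in zip(P[:-1], T[1:]))
    det = s_tt * s_pp - s_tp * s_tp
    a = (s_ty * s_pp - s_py * s_tp) / det
    b = (s_py * s_tt - s_ty * s_tp) / det
    return a, b

# Simulate with known (hypothetical) a = 0.9, b = 0.05 and a square-wave
# heat input, then recover the parameters from the input/output data alone.
a_true, b_true = 0.9, 0.05
P = [1.0 if (k // 20) % 2 == 0 else 0.0 for k in range(200)]
T = [0.0]
for k in range(199):
    T.append(a_true * T[-1] + b_true * P[k])
a_hat, b_hat = identify_first_order(T, P)
```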

  12. Assessment of parameter regionalization methods for modeling flash floods in China

    Science.gov (United States)

    Ragettli, Silvan; Zhou, Jian; Wang, Haijing

    2017-04-01

    Rainstorm flash floods are a common and serious phenomenon during the summer months in many hilly and mountainous regions of China. For this study, we develop a modeling strategy for simulating flood events in small river basins of four Chinese provinces (Shanxi, Henan, Beijing, Fujian). The presented research is part of preliminary investigations for the development of a national operational model for predicting and forecasting hydrological extremes in basins of size 10-2000 km2, most of which are ungauged or poorly gauged. The project is supported by the China Institute of Water Resources and Hydropower Research within the framework of the national initiative for flood prediction and early warning systems for mountainous regions in China (research project SHZH-IWHR-73). We use the USGS Precipitation-Runoff Modeling System (PRMS) as implemented in the Java modeling framework Object Modeling System (OMS). PRMS can operate at both daily and storm timescales, switching between the two using a precipitation threshold. This functionality allows the model to perform continuous simulations over several years and to switch to the storm mode to simulate storm response in greater detail. The model was set up for fifteen watersheds for which hourly precipitation and runoff data were available. First, automatic calibration based on the Shuffled Complex Evolution method was applied to different hydrological response unit (HRU) configurations. The Nash-Sutcliffe efficiency (NSE) was used as the assessment criterion, with only runoff data from storm events considered. HRU configurations reflect the drainage-basin characteristics and depend on assumptions regarding drainage density and minimum HRU size. We then assessed the sensitivity of optimal parameters to different HRU configurations. Finally, the transferability to other watersheds of optimal model parameters that were not sensitive to HRU configurations was evaluated. Model calibration for the 15
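
    The Nash-Sutcliffe efficiency used as the calibration criterion is simple to compute; a minimal sketch with made-up flow values:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((o - s)^2) / sum((o - mean(o))^2).
    NSE = 1 is a perfect fit; NSE <= 0 means the simulation is no better
    than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [1.0, 3.0, 5.0, 4.0, 2.0]          # hypothetical observed storm flows
nse_perfect = nash_sutcliffe(obs, obs)    # identical simulation
nse_mean = nash_sutcliffe(obs, [3.0] * 5) # constant-mean simulation
```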

  13. Contaminant transport in aquifers: improving the determination of model parameters

    International Nuclear Information System (INIS)

    Sabino, C.V.S.; Moreira, R.M.; Lula, Z.L.; Chausson, Y.; Magalhaes, W.F.; Vianna, M.N.

    1998-01-01

    Parameters conditioning the migration behavior of cesium and mercury are measured with their tracers 137Cs and 203Hg in the laboratory, using both batch and column experiments. Batch tests were used to define the sorption isotherm characteristics. Also investigated were the influences of some test parameters, in particular those due to the volume of water to mass of soil ratio (V/m). A provisional relationship between V/m and the distribution coefficient, Kd, has been advanced, and a procedure to estimate Kd values valid for environmental values of the ratio V/m has been suggested. Column tests provided the parameters for a transport model. A major problem to be dealt with in such tests is the collimation of the radioactivity probe. Besides mechanically optimizing the collimator, a deconvolution procedure has been suggested and tested, with statistical criteria, to filter off both noise and spurious tracer signals. Correction procedures for the integrating effect introduced by sampling at the exit of columns have also been developed. These techniques may be helpful in increasing the accuracy required in the measurement of parameters conditioning contaminant migration in soils, thus allowing more reliable predictions based on mathematical model applications. (author)
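
    The batch-test distribution coefficient rests on a simple mass balance; a minimal sketch (the concentrations, volume and mass are made up):

```python
def distribution_coefficient(c0, c_eq, volume_ml, mass_g):
    """Batch-test distribution coefficient Kd = ((C0 - Ceq) / Ceq) * (V / m),
    in mL/g: sorbed amount per unit soil mass over equilibrium solution
    concentration. C0 is the initial and Ceq the equilibrium concentration."""
    return (c0 - c_eq) / c_eq * (volume_ml / mass_g)

# Hypothetical batch test: 100 -> 20 units of tracer in 50 mL over 5 g soil.
kd = distribution_coefficient(c0=100.0, c_eq=20.0, volume_ml=50.0, mass_g=5.0)
```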

  14. Data Set for Emperical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During the recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of the DSF performance will be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up and the measurements procedure for the data sets. The empirical data is composed for the key-functioning modes...

  15. HOM study and parameter calculation of the TESLA cavity model

    CERN Document Server

    Zeng, Ri-Hua; Gerigk Frank; Wang Guang-Wei; Wegner Rolf; Liu Rong; Schuh Marcel

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H− accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. In the calculations, the scripting language in HFSS was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts can also be applied to other cavities with different cell numbers and geometric structures). The automatically calculated results are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.

  16. Optimization and model reduction in the high dimensional parameter space of a budding yeast cell cycle model

    Science.gov (United States)

    2013-01-01

    Background Parameter estimation from experimental data is critical for mathematical modeling of protein regulatory networks. For realistic networks with dozens of species and reactions, parameter estimation is an especially challenging task. In this study, we present an approach for parameter estimation that is effective in fitting a model of the budding yeast cell cycle (comprising 26 nonlinear ordinary differential equations containing 126 rate constants) to the experimentally observed phenotypes (viable or inviable) of 119 genetic strains carrying mutations of cell cycle genes. Results Starting from an initial guess of the parameter values, which correctly captures the phenotypes of only 72 genetic strains, our parameter estimation algorithm quickly improves the success rate of the model to 105–111 of the 119 strains. This success rate is comparable to the best values achieved by a skilled modeler manually choosing parameters over many weeks. The algorithm combines two search and optimization strategies. First, we use Latin hypercube sampling to explore a region surrounding the initial guess. From these samples, we choose ∼20 different sets of parameter values that correctly capture wild type viability. These sets form the starting generation of differential evolution that selects new parameter values that perform better in terms of their success rate in capturing phenotypes. In addition to producing highly successful combinations of parameter values, we analyze the results to determine the parameters that are most critical for matching experimental outcomes and the most competitive strains whose correct outcome with a given parameter vector forces numerous other strains to have incorrect outcomes. These “most critical parameters” and “most competitive strains” provide biological insights into the model. Conversely, the “least critical parameters” and “least competitive strains” suggest ways to reduce the computational complexity of the
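
    The Latin hypercube step of the algorithm above can be sketched in pure Python (stratified sampling only; the parameter bounds are hypothetical and the differential-evolution stage is omitted):

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample: n points in len(bounds) dimensions; each
    dimension's range is split into n equal strata, every stratum is used
    exactly once, and strata are paired at random across dimensions."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        order = list(range(n))
        rng.shuffle(order)
        width = (hi - lo) / n
        cols.append([lo + (i + rng.random()) * width for i in order])
    return [list(p) for p in zip(*cols)]

# Hypothetical 2-parameter region around an initial guess:
# one rate constant in [0, 1], another in [10, 100].
pts = latin_hypercube(20, [(0.0, 1.0), (10.0, 100.0)])
```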

  17. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
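
    A minimal sketch of the two-parameter polynomial radial model and its inversion by fixed-point iteration (the distortion coefficients are made up; the Hough-based line detection and the optimization over the distortion center are omitted):

```python
def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Two-parameter polynomial radial model about the distortion center:
    r_d = r * (1 + k1*r**2 + k2*r**4)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * s, cy + dy * s

def undistort(xd, yd, k1, k2, cx=0.0, cy=0.0, iters=20):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted offset by the scale evaluated at the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x = cx + (xd - cx) / s
        y = cy + (yd - cy) / s
    return x, y

# Round-trip a point through hypothetical coefficients k1 = -0.1, k2 = 0.02.
xd, yd = distort(0.30, 0.20, k1=-0.1, k2=0.02)
x, y = undistort(xd, yd, k1=-0.1, k2=0.02)
```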

  18. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  19. The definition of input parameters for modelling of energetic subsystems

    Directory of Open Access Journals (Sweden)

    Ptacek M.

    2013-06-01

    Full Text Available This paper is a short review and a basic description of mathematical models of renewable energy sources which present individual investigated subsystems of a system created in Matlab/Simulink. It solves the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to the distribution networks. The fuel cell technology is much less connected to the distribution networks but it could be promising in the near future. Therefore, the paper informs about a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main evaluated and achieved graphic results for the suggested parameters and for all the individual subsystems mentioned above are shown.

  20. The definition of input parameters for modelling of energetic subsystems

    Science.gov (United States)

    Ptacek, M.

    2013-06-01

    This paper is a short review and a basic description of mathematical models of renewable energy sources which present individual investigated subsystems of a system created in Matlab/Simulink. It solves the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to the distribution networks. The fuel cell technology is much less connected to the distribution networks but it could be promising in the near future. Therefore, the paper informs about a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main evaluated and achieved graphic results for the suggested parameters and for all the individual subsystems mentioned above are shown.

  1. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  2. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach; (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
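
    Of the ancillary values listed, the Froude number is the simplest sanity check on modeled hydraulics; a minimal sketch (the velocity and depth values are made up):

```python
import math

def froude_number(velocity_m_s, depth_m, g=9.81):
    """Froude number Fr = v / sqrt(g * d) for open-channel flow:
    Fr < 1 is subcritical, Fr > 1 supercritical."""
    return velocity_m_s / math.sqrt(g * depth_m)

# Hypothetical section: 2.0 m/s mean velocity in 1.5 m of flow depth.
fr = froude_number(2.0, 1.5)
```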

  3. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...... be disregarded without significant loss of accuracy. Finally, special attention is drawn to the influence of the skirt stiffness, i.e. whether the embedded part of the caisson is rigid or flexible....

  4. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily accessible water resources. Increasing withdrawal of surface water and discharge of wastewater cause drastic changes in surface water quality. The importance of water quality in these most vulnerable and most important water supply resources is clear. Unfortunately, in recent years, because of population growth, economic development, and increased industrial production, the entry of pollutants into water bodies has increased. Water quality parameters express the physical, chemical, and biological features of water, so water quality monitoring is more necessary than ever. Each use of water, such as agriculture, drinking, industry, and aquaculture, needs water of a particular quality; accurate estimation of the concentrations of water quality parameters is therefore important. Material and Methods: In this research, two input variable selection methods (correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) data screening, (2) identification of the input data that affect the output data, and (3) selection of the training and testing data. A Genetic Algorithm-Least Square Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. The LSSVR method assumes that the relationship between the input and output variables is nonlinear, but that a nonlinear mapping can create a feature space in which this relationship is linear. The developed algorithm maximizes the accuracy of the LSSVR method by automatically tuning the LSSVR parameters. The genetic algorithm (GA) is an evolutionary algorithm which can automatically find the optimum coefficients of the Least Square Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to
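
The GA-LSSVR idea can be sketched compactly. The snippet below is a simplified stand-in, not the authors' code: it solves the standard LS-SVR dual system with an RBF kernel and replaces the genetic algorithm with a small validation-based grid search over the hyperparameters (gamma, sigma); the data are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    # Standard LS-SVR dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(Xnew, Xtr, b, alpha, sigma):
    return rbf_kernel(Xnew, Xtr, sigma) @ alpha + b

# Synthetic nonlinear "water quality" response with noise
rng = np.random.default_rng(0)
X = np.linspace(0, 3, 40)[:, None]
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(40)
Xtr, ytr, Xva, yva = X[::2], y[::2], X[1::2], y[1::2]

# Validation-error search over (gamma, sigma), standing in for the paper's GA
gamma, sigma = min(
    ((g, s) for g in (1.0, 10.0, 100.0) for s in (0.2, 0.5, 1.0)),
    key=lambda p: np.mean((yva - lssvr_predict(Xva, Xtr, *lssvr_fit(Xtr, ytr, *p), p[1])) ** 2))
b, alpha = lssvr_fit(Xtr, ytr, gamma, sigma)
mse = np.mean((yva - lssvr_predict(Xva, Xtr, b, alpha, sigma)) ** 2)
print((gamma, sigma), round(mse, 4))
```

A real GA would evolve a population of (gamma, sigma) candidates instead of enumerating a fixed grid, but the fitness evaluation is the same.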

  5. A procedure for determining parameters of a simplified ligament model.

    Science.gov (United States)

    Barrett, Jeff M; Callaghan, Jack P

    2018-01-03

    A previous mathematical model of ligament force generation treated ligament behavior as that of a population of collagen fibres arranged in parallel. When damage was ignored in this model, an expression was obtained for ligament force in terms of the deflection, x, the effective stiffness, k, the mean collagen slack length, μ, and the standard deviation of slack lengths, σ. We present a simple three-step method for determining the three model parameters (k, μ, and σ) from force-deflection data: (1) determine the equation of the line in the linear region of this curve; its slope is k and its x-intercept is -μ; (2) interpolate the force-deflection data at x = -μ to obtain F₀; (3) calculate σ from the equation σ = √(2π)F₀/k. Results from this method were in good agreement with those obtained from a least-squares procedure on experimental data, all falling within 6%. Therefore, parameters obtained using the proposed method provide a systematic way of reporting ligament parameters, or an initial guess for nonlinear least-squares. Copyright © 2017 Elsevier Ltd. All rights reserved.
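
The three steps are easy to script. In this hedged sketch the synthetic force-deflection data are generated from the fibre-recruitment model itself, F(x) = k[σφ(z) + (x−μ)Φ(z)] with z = (x−μ)/σ, and the deflection sign convention places the x-intercept at +μ (the abstract's convention differs only in sign); all numbers are illustrative.

```python
import math
import numpy as np

def ligament_parameters(x, F, linear_frac=0.3):
    # Step 1: line through the linear (high-deflection) region -> slope k, x-intercept
    n = max(2, int(linear_frac * len(x)))
    k, b = np.polyfit(x[-n:], F[-n:], 1)
    x0 = -b / k                                   # intercept; +mu in this convention
    # Step 2: interpolate the measured curve at the intercept to get the toe force F0
    F0 = np.interp(x0, x, F)
    # Step 3: sigma from F0 = k * sigma / sqrt(2*pi)
    return k, x0, math.sqrt(2 * math.pi) * F0 / k

# Synthetic data from the fibre-recruitment model: F(x) = k*(sigma*phi(z) + (x-mu)*Phi(z))
k_t, mu_t, s_t = 50.0, 2.0, 0.3
x = np.linspace(0.5, 4.0, 200)
z = (x - mu_t) / s_t
Phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
F = k_t * (s_t * np.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi) + (x - mu_t) * Phi)
k_e, mu_e, s_e = ligament_parameters(x, F)
print(round(k_e, 1), round(mu_e, 2), round(s_e, 2))
```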

  6. Modelling spatial-temporal and coordinative parameters in swimming.

    Science.gov (United States)

    Seifert, L; Chollet, D

    2009-07-01

    This study modelled the changes in spatial-temporal and coordinative parameters through race paces in the four swimming strokes. The arm and leg phases in simultaneous strokes (butterfly and breaststroke) and the inter-arm phases in alternating strokes (crawl and backstroke) were identified by video analysis to calculate the time gaps between propulsive phases. The relationships among velocity, stroke rate, stroke length and coordination were modelled by polynomial regression. Twelve elite male swimmers swam at four race paces. Quadratic regression modelled the changes in spatial-temporal and coordinative parameters with velocity increases for all four strokes. First, the quadratic regression between coordination and velocity showed changes common to all four strokes. Notably, the time gaps between the key points defining the beginning and end of the stroke phases decreased with increases in velocity, which led to decreases in glide times and increases in the continuity between propulsive phases. Conjointly, the quadratic regression among stroke rate, stroke length and velocity was similar to the changes in coordination, suggesting that these parameters may influence coordination. The main practical application for coaches and scientists is that ineffective time gaps can be distinguished from those that simply reflect an individual swimmer's profile by monitoring the glide times within a stroke cycle. In the case of ineffective time gaps, targeted training could improve the swimmer's management of glide time.
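
The polynomial modelling step amounts to a quadratic fit of each parameter against velocity. A minimal sketch with invented values for one swimmer (velocity in m/s, an IdC-style coordination index in %):

```python
import numpy as np

# Four race paces for one (hypothetical) swimmer
v = np.array([1.35, 1.48, 1.60, 1.72])          # swimming velocity, m/s
idc = np.array([-8.0, -4.5, -1.0, 3.2])         # coordination index, %

# Quadratic regression idc(v) = a*v^2 + b*v + c, as in the study
coeffs = np.polyfit(v, idc, 2)
fitted = np.polyval(coeffs, v)
r2 = 1.0 - np.sum((idc - fitted) ** 2) / np.sum((idc - idc.mean()) ** 2)
print(np.round(coeffs, 2), round(r2, 3))
```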

  7. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    Science.gov (United States)

    Soong, David T.; Over, Thomas M.

    2015-01-01

    The Lake Michigan Diversion Accounting (LMDA) system has been developed by the U.S. Army Corps of Engineers, Chicago District (USACE-Chicago) and the State of Illinois as a part of the interstate Great Lakes water regulatory program. The diverted Lake Michigan watershed is a 673-square-mile watershed comprising the Chicago River and Calumet River watersheds. These originally drained into Lake Michigan, but now flow to the Mississippi River watershed via three canals constructed in the Chicago area in the early twentieth century. Approximately 393 square miles of the diverted watershed are ungaged, and the runoff from the ungaged portion has been estimated by the USACE-Chicago using the Hydrological Simulation Program-FORTRAN (HSPF) program. The accuracy of simulated runoff depends on the accuracy of the parameter set used in the HSPF program. Nine parameter sets, named North Branch, Little Calumet, Des Plaines, Hickory Creek, CSSC, NIPC, 1999, CTE, and 2008, have been developed at different times and used by the USACE-Chicago. In this study, the U.S. Geological Survey and the USACE-Chicago collaboratively analyzed the parameter sets using nine gaged watersheds in or adjacent to the diverted watershed to assess the predictive accuracies of selected parameter sets. Six of the parameter sets (North Branch, Hickory Creek, NIPC, 1999, CTE, and 2008) were applied to the nine gaged watersheds to evaluate their simulation accuracy for water years 1996 to 2011. The nine gaged watersheds were modeled using the three LMDA land-cover types (grass, forest, and hydraulically connected imperviousness) based on the 2006 National Land Cover Database, and the latest meteorological and precipitation data consistent with the current (2014) LMDA modeling framework.

  8. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
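
For reference, the three-parameter logistic (3PL) item response function whose parameters (discrimination a, difficulty b, guessing c) are being calibrated can be written in a few lines; the parameter values below are illustrative.

```python
import math

def p_3pl(theta, a, b, c, D=1.7):
    """3PL probability of a correct response; D = 1.7 is the usual scaling constant."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# At theta = b the probability is exactly c + (1 - c)/2; far below b it
# approaches the guessing floor c
print(round(p_3pl(0.5, a=1.2, b=0.5, c=0.2), 3))   # midpoint: 0.6
print(round(p_3pl(-6.0, a=1.2, b=0.5, c=0.2), 3))  # near the floor c = 0.2
```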

  9. Robust Approximation to Adaptive Control by Use of Representative Parameter Sets with Particular Reference to Type 1 Diabetes

    Directory of Open Access Journals (Sweden)

    Anthony Shannon

    2006-04-01

    Full Text Available This paper describes an approach to adaptive optimal control in the presence of model parameter calculation difficulties. This has wide application in a variety of biological and biomedical research and clinical problems. To illustrate the techniques, the approach is applied to the development and implementation of a practical adaptive insulin infusion algorithm for use with patients with Type 1 diabetes mellitus.

  10. Local sensitivity analysis of a distributed parameters water quality model

    International Nuclear Information System (INIS)

    Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.

    1997-01-01

    A local sensitivity analysis is presented of a 1D water-quality reaction-diffusion model. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, which is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorus (RP), phytoplankton and zooplankton densities, Dissolved Oxygen (DO), and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and the ranking of the parameters related to primary production in different mixing conditions

  11. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
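
The workflow (surrogate built from a few model runs, then Bayesian updating on the surrogate) can be sketched in a few lines. Everything below is a toy stand-in: a cheap quadratic plays the "expensive" ocean model, the surrogate is a polynomial regression, and one noisy observation updates a drag-like coefficient via random-walk Metropolis.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(cd):                      # stand-in for the ocean model
    return 2.0 * cd + 0.5 * cd ** 2

# Non-intrusive surrogate: polynomial regression on a handful of model runs
cd_train = np.linspace(0.5, 2.0, 8)
surrogate = np.poly1d(np.polyfit(cd_train, expensive_model(cd_train), 2))

# Bayesian update from one noisy observation via random-walk Metropolis
obs, noise = expensive_model(1.3) + 0.05, 0.1
def log_post(cd):                             # uniform prior on the training range
    return -0.5 * ((obs - surrogate(cd)) / noise) ** 2 if 0.5 <= cd <= 2.0 else -np.inf

chain, cd = [], 1.0
lp = log_post(cd)
for _ in range(5000):
    prop = cd + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # cheap: only the surrogate is evaluated
        cd, lp = prop, lp_prop
    chain.append(cd)
post_mean = float(np.mean(chain[1000:]))
print(round(post_mean, 2))
```

The point of the surrogate is the acceptance step: each MCMC iteration costs a polynomial evaluation rather than a full forward model run.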

  12. Input parameters for LEAP and analysis of the Model 22C data base

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, L.; Goldstein, M.

    1981-05-01

    The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.

  13. Interactive Modelling of Shapes Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose a technique for intuitive, interactive modelling of 3D shapes. The technique is based on the Level-Set Method (LSM), which has the virtue of easily handling changes to the topology of the represented solid. Furthermore, this method leads to sculpting operations which are suitable for shape modelling. Normally, however, these operations would result in tools that affect the entire model. To facilitate local changes to the model, we introduce a windowing scheme which constrains the LSM to affect only a small part of the model. The LSM-based sculpting tools have been incorporated in our sculpting system, which also includes facilities for volumetric CSG and several techniques for visualization.

  14. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    International Nuclear Information System (INIS)

    Soares, T. A.; Daura, X.; Oostenbrink, C.; Smith, L. J.; Gunsteren, W. F. van

    2004-01-01

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J(HNα) and ³J(αβ) coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J(HNα) coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J(αβ) coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger set of experimental data agrees as well with each of the simulations. In other words, the experimental data converged towards the theoretical result

  15. Finding the effective parameter perturbations in atmospheric models: the LORENZ63 model as case study

    NARCIS (Netherlands)

    Moolenaar, H.E.; Selten, F.M.

    2004-01-01

    Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate
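
The Lorenz-63 system named in the title makes this concrete: perturb a parameter (here the forcing ρ) and compare a long-time "climate" statistic. A minimal sketch with the standard parameter values; the choice of mean z as the statistic is ours, for illustration.

```python
import numpy as np

def rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def mean_z(rho, steps=8000, spinup=2000, dt=0.01):
    """Long-time mean of z, a simple 'climate' statistic, under forcing rho."""
    s, zs = np.array([1.0, 1.0, 1.0]), []
    for i in range(steps):                    # classical 4th-order Runge-Kutta
        k1 = rhs(s, rho=rho)
        k2 = rhs(s + 0.5 * dt * k1, rho=rho)
        k3 = rhs(s + 0.5 * dt * k2, rho=rho)
        k4 = rhs(s + dt * k3, rho=rho)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if i >= spinup:
            zs.append(s[2])
    return float(np.mean(zs))

# Perturbing rho shifts the attractor's mean state
print(round(mean_z(28.0), 1), round(mean_z(32.0), 1))
```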

  16. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective...

  17. Setting development goals using stochastic dynamical system models.

    Science.gov (United States)

    Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T

    2017-01-01

    The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic, dynamical system models built from historical data. In particular, we show that the MDG target of two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.

  18. Parameter sensitivity analysis for activated sludge models No. 1 and 3 combined with one-dimensional settling model.

    Science.gov (United States)

    Kim, J R; Ko, J H; Lee, J J; Kim, S H; Park, T J; Kim, C W; Woo, H J

    2006-01-01

    The aim of this study was to suggest a sensitivity analysis technique that can reliably predict effluent quality and minimize calibration efforts without being seriously affected by influent composition and parameter uncertainty in the activated sludge models No. 1 (ASM1) and No. 3 (ASM3) with a settling model. The parameter sensitivities for ASM1 and ASM3 were analyzed by three techniques: SVM-Slope, RVM-SlopeMA, and RVM-AreaCRF. The settling model parameters were also considered. The selected highly sensitive parameters were estimated with a genetic algorithm, and the simulation results were compared in terms of ΔEQ. For ASM1, the SVM-Slope technique proved to be an acceptable approach because it identified consistent sensitive parameter sets and presented smaller ΔEQ under every tested condition. For ASM3, no technique identified consistently sensitive parameters under different conditions. This phenomenon was regarded as a reflection of the high sensitivity of the ASM3 parameters. It should be noted, however, that the SVM-Slope technique presented reliable ΔEQ under every influent condition. Moreover, it was the simplest and easiest methodology for coding and quantification among those tested. Therefore, it was concluded that the SVM-Slope technique could be a reasonable approach for both ASM1 and ASM3.

  19. Accounting for baryonic effects in cosmic shear tomography: Determining a minimal set of nuisance parameters using PCA

    Energy Technology Data Exchange (ETDEWEB)

    Eifler, Tim; Krause, Elisabeth; Dodelson, Scott; Zentner, Andrew; Hearin, Andrew; Gnedin, Nickolay

    2014-05-28

    Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
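
The core of PCA marginalization is a principal component analysis of the differences between baryonic and baryon-free data vectors across scenarios. A toy numpy sketch (the scenario curves are invented and rank-2 by construction, so two components suffice; real data vectors would need a few more):

```python
import numpy as np

# Mock ratios of baryonic to dark-matter-only observables in 10 bins,
# for six hydro scenarios (invented smooth suppressions)
k = np.linspace(0.0, 1.0, 10)
coeffs = [(-0.10, 0.02), (-0.15, 0.03), (-0.05, 0.01),
          (-0.20, 0.05), (-0.08, 0.00), (-0.12, 0.04)]
scenarios = np.array([1.0 + a * k ** 2 + b * k for a, b in coeffs])

# PCA of the deviations from the baryon-free prediction, via SVD
diffs = scenarios - 1.0
U, S, Vt = np.linalg.svd(diffs, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)

# A handful of principal components span the baryonic scatter, so only a few
# nuisance amplitudes need to be sampled and marginalized
print(np.round(explained, 3))
```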

  20. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.

  1. Leather for motorcyclist garments: Multi-test based material model fitting in terms of Ogden parameters

    Directory of Open Access Journals (Sweden)

    Bońkowski T.

    2017-12-01

    Full Text Available This paper is focused on experimental testing and modeling of the genuine leather used in motorcycle personal protective equipment. Simulations of powered two-wheeler (PTW) accidents are usually performed using human body models (HBM) for injury assessment, equipped only with a helmet model. However, the kinematics of the PTW rider during a real accident is disturbed by the stiffness of his suit, which is normally not taken into account during the reconstruction or simulation of the accident scenario. The material model proposed in this paper can be used in numerical simulations of crash scenarios that include the effect of the motorcyclist's garment. The fitting procedure was conducted on 2 sets of samples: 5 uniaxial samples and 5 biaxial samples. The experimental characteristics were used to obtain a set of 25 constitutive material models in terms of Ogden parameters.
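
A one-term version of such a fit can be sketched as follows. For an incompressible one-term Ogden solid, the uniaxial nominal stress is P(λ) = μ(λ^(α−1) − λ^(−α/2−1)); a coarse grid search stands in for a proper optimizer, and the material constants and data below are synthetic, not the leather measurements.

```python
import numpy as np

def ogden_uniaxial(lam, mu, alpha):
    """Nominal uniaxial stress of an incompressible one-term Ogden solid."""
    return mu * (lam ** (alpha - 1.0) - lam ** (-alpha / 2.0 - 1.0))

# Synthetic uniaxial data from known parameters plus small noise
rng = np.random.default_rng(3)
mu_true, a_true = 4.0, 6.0                      # illustrative, not leather values
lam = np.linspace(1.0, 1.4, 25)                 # stretch ratios
stress = ogden_uniaxial(lam, mu_true, a_true) + 0.02 * rng.standard_normal(25)

# Coarse grid search over (mu, alpha); a least-squares solver would refine this
grid = [(m, a) for m in np.linspace(1.0, 8.0, 71) for a in np.linspace(2.0, 10.0, 81)]
mu_fit, a_fit = min(grid, key=lambda p: np.sum((stress - ogden_uniaxial(lam, *p)) ** 2))
print(round(mu_fit, 1), round(a_fit, 1))
```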

  2. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model

    Directory of Open Access Journals (Sweden)

    Shengyu eJiang

    2016-02-01

    Full Text Available Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
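
For context, the graded response model that the MGRM extends assigns each ordered category a probability as the difference of two boundary curves. A minimal unidimensional sketch with illustrative parameter values:

```python
import math

def grm_probs(theta, a, thresholds):
    """Samejima graded response model: category probabilities for one item.

    thresholds are ordered boundary difficulties b_1 < ... < b_{K-1};
    P*(k) = 1 / (1 + exp(-a*(theta - b_k))) and P(category k) = P*(k-1) - P*(k).
    """
    pstar = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return [pstar[k] - pstar[k + 1] for k in range(len(thresholds) + 1)]

probs = grm_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])   # four category probabilities summing to 1
```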

  3. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  4. Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, Sandra F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Arimescu, Carmen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Napier, Bruce A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hay, Tristan R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-11-01

    The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum from chemical data, meteorological data, and agricultural data to behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.

  5. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    Science.gov (United States)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for estimation of the identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the parameters s (intrinsic growth rate of predators) and m (prey reserve). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated values. Numerical simulations are presented to substantiate the analytical findings.

  6. A Comparison of the One- and Three-Parameter Logistic Models on Measures of Test Efficiency.

    Science.gov (United States)

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  7. Methods of mathematical modeling using polynomials of algebra of sets

    Science.gov (United States)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be observed. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. These formulas can be programmed, which makes it possible to work with them using computer models.
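    As a toy illustration of the idea (zone layout, coordinates, and priorities are invented here, not taken from the article), overlapping surveillance zones can be modelled directly with set algebra:

```python
# Each zone is modelled as a set of grid cells; overlap is set intersection.
zone_a = {(x, y) for x in range(0, 4) for y in range(0, 4)}   # high-priority zone
zone_b = {(x, y) for x in range(2, 6) for y in range(2, 6)}   # low-priority zone

overlap = zone_a & zone_b    # cells observed by both zones
covered = zone_a | zone_b    # the whole monitored territory
only_a = zone_a - zone_b     # cells exclusive to zone A

def priority(cell, zones):
    """Highest priority of any zone containing the cell (0 = unmonitored)."""
    return max((p for z, p in zones if cell in z), default=0)

zones = [(zone_a, 2), (zone_b, 1)]
```

A cell in the overlap, e.g. (3, 3), resolves to the higher of the two priorities, which is exactly the kind of formula over unions and intersections the article proposes to program.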

  8. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of DSF performance will be developed or pointed out. This is, however, not possible to do until...... the model is empirically validated and its limitations for DSF modeling are identified. Correspondingly, the existence and availability of experimental data are essential. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within...... of a double skin facade: 1. External air curtain mode, i.e. the naturally ventilated DSF cavity with the top and bottom openings open to the outdoor; 2. Thermal insulation mode, with all of the DSF openings closed; 3. Preheating mode, with the bottom DSF openings open to the outdoor and top openings open...

  9. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulic management, and policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady-state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
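    A minimal sketch of the hydraulic-management idea (all numbers here are hypothetical): with a linear "response matrix", drawdown at each control point is a superposition of the well pumping rates, and the total pumping can be maximized subject to drawdown limits. A real management model would solve this with linear programming; a brute-force grid search keeps the sketch self-contained:

```python
import itertools

# Hypothetical response matrix: drawdown_i = sum_j R[i][j] * q_j
R = [[0.010, 0.004],     # drawdown response at control point 1 (m per m3/day)
     [0.004, 0.012]]     # drawdown response at control point 2
d_max = [2.0, 2.5]       # allowable drawdowns at the control points (m)

best_q, best_total = None, -1.0
# Candidate pumping rates for the two wells (m3/day), coarse 5-unit grid
for q in itertools.product(range(0, 301, 5), repeat=2):
    drawdown = [sum(r_ij * q_j for r_ij, q_j in zip(row, q)) for row in R]
    if all(d <= dm for d, dm in zip(drawdown, d_max)) and sum(q) > best_total:
        best_q, best_total = q, sum(q)
```

The optimum sits where the drawdown constraints bind, which is exactly the structure a linear program exploits directly instead of enumerating candidates.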

  10. Some notes on unobserved parameters (frailties) in reliability modeling

    International Nuclear Information System (INIS)

    Cha, Ji Hwan; Finkelstein, Maxim

    2014-01-01

    Unobserved random quantities (frailties) often appear in various reliability problems, especially when dealing with the failure rates of items from heterogeneous populations. As the failure rate is a conditional characteristic, the distributions of these random quantities, similarly to Bayesian approaches, are updated in accordance with the corresponding survival information. In some instances, apart from their statistical meaning, frailties can also have useful interpretations describing the underlying lifetime model. We discuss and clarify these issues in a reliability context and present and analyze several meaningful examples. We consider the proportional hazards model with a random factor; the stress–strength model, where the unobserved strength of a system can be viewed as a frailty; a parallel system with a random number of components; and, finally, the first passage time problem for the Wiener process with random parameters. - Highlights: • We discuss and clarify the notion of frailty in a reliability context and present and analyze several meaningful examples. • The paper provides new insight and a general perspective on reliability models with unobserved parameters. • The main message of the paper is illustrated by several meaningful examples and emphasized by a detailed discussion

  11. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    Science.gov (United States)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    , gradient-based and nature-inspired optimization algorithms, and experimental data, the latter of which take the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading which may be further used in research involving dynamic and high-speed tensile loading. Based on the obtained results it can be concluded that the set goal has been reached.

  12. Hydrological Modelling and Parameter Identification for Green Roof

    Science.gov (United States)

    Lo, W.; Tung, C.

    2012-12-01

    Green roofs, multilayered systems covered by plants, can be used to replace traditional concrete roofs as one of various measures to mitigate increasing stormwater runoff in the urban environment. Moreover, in the face of the high uncertainty of climate change, conventional engineering measures may prove to be inappropriate adaptations; green roofs, by contrast, are flexible, low-regret measures, and are thus particularly important and suitable. The related technology has been under development for several years, and research evaluating the stormwater reduction performance of green roofs is flourishing. Many European countries, cities in the U.S., and other local governments incorporate green roofs into their stormwater control policies. Therefore, in terms of stormwater management, it is necessary to develop a robust hydrologic model to quantify the efficacy of green roofs over different types of designs and environmental conditions. In this research, a physically based hydrologic model is proposed to simulate the water flow process in the green roof system. In particular, the model adopts the concept of water balance, a relatively simple and intuitive idea. The research also compares two methods of surface water balance calculation: one based on the Green-Ampt equation, the other on the SCS curve number method. A green roof experiment was designed to collect weather data and water discharge. The proposed model is verified against these observed data, and the parameters used in the model are calibrated to find appropriate values for green roof hydrologic simulation. This research thus offers a simple physically based hydrologic model and a procedure to determine its parameters.
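    The water-balance concept with the SCS curve number option can be sketched as follows. The curve number, storage capacity, and daily series below are placeholders, not values from the study; the standard SCS formulation is Q = (P - Ia)² / (P - Ia + S) with S = 25400/CN - 254 in millimetres:

```python
def scs_runoff(p_mm, cn):
    """Daily surface runoff (mm) from rainfall p_mm via the SCS curve-number method."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def green_roof_balance(rain, et, capacity_mm, cn=86.0, storage=0.0):
    """Daily water balance: storage gains infiltration, loses ET; excess drains."""
    drained = 0.0
    for p, e in zip(rain, et):
        q = scs_runoff(p, cn)                       # immediate surface runoff
        storage = max(0.0, storage + (p - q) - e)   # infiltration minus ET
        if storage > capacity_mm:                   # overflow leaves as drainage
            drained += storage - capacity_mm
            storage = capacity_mm
        drained += q
    return drained, storage

rain = [0.0, 12.0, 30.0, 0.0, 5.0]   # hypothetical daily rainfall (mm)
et = [2.0, 2.0, 1.5, 3.0, 2.5]       # hypothetical daily evapotranspiration (mm)
total_out, final_storage = green_roof_balance(rain, et, capacity_mm=25.0)
```

Calibration in the paper's sense would adjust cn and capacity_mm until the simulated discharge matches the experimental record.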

  13. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Full Text Available Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios, and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
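    The interplay of the two uncertainty sources can be illustrated with a deliberately toy calculation (the response function, parameter values, and scenario factors are all hypothetical, not from the MACRO study):

```python
# Hypothetical setup: pesticide loss scales with a calibrated soil parameter k
# and a scenario-specific rainfall change factor f (f = 1.0 for the present).
param_sets = [0.8, 1.0, 1.2, 1.4]          # ensemble of acceptable calibrations
scenarios = {"GCM-A": 1.15, "GCM-B": 0.95, "GCM-C": 1.30}  # change factors

def loss(k, f):
    return 10.0 * k * f                    # toy leaching response

# Predicted change from present to future, per scenario and parameter set
changes = {name: [loss(k, f) - loss(k, 1.0) for k in param_sets]
           for name, f in scenarios.items()}
```

Even in this toy, the sign of the predicted change flips between scenarios (positive under GCM-A, negative under GCM-B), while within a scenario the parameter ensemble only spreads the magnitude — mirroring the paper's finding that the direction of change is dominated by climate input and the absolute losses by parameter uncertainty.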

  14. Effect of software version and parameter settings on the marginal and internal adaptation of crowns fabricated with the CAD/CAM system

    Directory of Open Access Journals (Sweden)

    Ji Suk SHIM

    2015-10-01

    Full Text Available Objective This study investigated the marginal and internal adaptation of individual dental crowns fabricated using a CAD/CAM system (Sirona's BlueCam), also evaluating the effect of the software version used and the specific parameter settings on the adaptation of crowns. Material and Methods Forty digital impressions of a previously prepared master model were acquired using an intraoral scanner and divided into four groups based on the software version and on the spacer settings used. Versions 3.8 and 4.2 of the software were used, and the spacer parameter was set at either 40 μm or 80 μm. The marginal and internal fit of the crowns were measured using the replica technique, which uses a low-viscosity silicone material that simulates the thickness of the cement layer. The data were analyzed using a Friedman two-way analysis of variance (ANOVA) and paired t-tests with the significance level set at p<0.05. Results The two-way ANOVA showed that the software version (p<0.05) and the spacer parameter (p<0.05) significantly affected crown adaptation. The crowns designed with version 4.2 of the software showed a better fit than those designed with version 3.8, particularly in the axial wall and in the inner margin. The spacer parameter was more accurately represented in version 4.2 of the software than in version 3.8. In addition, the use of version 4.2 of the software combined with the spacer parameter set at 80 μm showed the least variation. On the other hand, the outer margin was not affected by the variables. Conclusion Compared to version 3.8 of the software, version 4.2 can be recommended for the fabrication of well-fitting crown restorations and for the appropriate regulation of the spacer parameter.

  15. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable science and technology development is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. The mechanical nature of the means of production is complemented by control and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is therefore necessary to use computing techniques and decision-making methods, ranging from heuristic methods to more precise methodological procedures, during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  16. Local and global inverse modelling strategies to estimate parameters for pesticide leaching from lysimeter studies.

    Science.gov (United States)

    Kahl, Gunnar M; Sidorenko, Yury; Gottesbüren, Bernhard

    2015-04-01

    As an option for higher-tier leaching assessment of pesticides in Europe according to FOCUS, pesticide properties can be estimated from lysimeter studies by inversely fitting parameter values (substance half-life DT50 and sorption coefficient to organic matter, k_om). The aim of the study was to identify adequate methods for inverse modelling. Model parameters for the PEARL (Pesticide Emission Assessment at Regional and Local scales) model were estimated with different inverse optimisation algorithms: the Levenberg-Marquardt (LM), PD_MS2 (PEST Driver Multiple Starting Points 2) and SCEM (Shuffled Complex Evolution Metropolis) algorithms. Optimisation of crop factors and hydraulic properties was found to be an ill-posed problem, and all algorithms failed to identify reliable global minima for the hydrological parameters. All algorithms performed equally well in estimating pesticide sorption and degradation parameters. SCEM was in most cases the only algorithm that reliably calculated uncertainties. The most reliable approach for finding the best parameter set in the stepwise approach of optimising evapotranspiration, soil hydrology and pesticide parameters was to run only SCEM, or a combined approach with more than one algorithm using the best fit of each step for further processing. PD_MS2 was well suited to a quick parameter search. The linear parameter uncertainty intervals estimated by LM and PD_MS2 were usually larger than those of the non-linear method used by SCEM. With the suggested methods, parameter optimisation, together with reliable estimation of uncertainties, is possible even for relatively complex systems. © 2014 Society of Chemical Industry.
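    The flavour of the inverse step can be sketched on a heavily simplified problem: a one-parameter Levenberg-Marquardt fit of a first-order degradation curve, from which DT50 follows as ln(2)/k. The function names, data, and the first-order simplification are illustrative only, not the PEARL/MACRO interface:

```python
import math

def simulate(k, times, c0=100.0):
    """First-order decline C(t) = c0 * exp(-k t), a common simplification."""
    return [c0 * math.exp(-k * t) for t in times]

def fit_k(times, obs, k=0.01, lam=1e-3, c0=100.0, iters=200):
    """One-parameter Levenberg-Marquardt for C(t) = c0 * exp(-k t)."""
    for _ in range(iters):
        pred = simulate(k, times, c0)
        resid = [o - p for o, p in zip(obs, pred)]
        jac = [-t * p for t, p in zip(times, pred)]   # dC/dk = -t * C
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, resid))
        k = max(k + jtr / (jtj + lam), 1e-9)          # damped Gauss-Newton step
    return k

times = [0, 7, 14, 30, 60, 90]          # sampling days
obs = simulate(0.05, times)             # synthetic noise-free observations
k_hat = fit_k(times, obs)
dt50 = math.log(2) / k_hat              # half-life from the fitted rate
```

Multi-start wrappers around such a local fit (many initial k values, keep the best) are the essence of what PD_MS2 adds, and population-based samplers like SCEM additionally characterize the uncertainty of the fit.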

  17. Studies on Effect of Fused Deposition Modelling Process Parameters on Ultimate Tensile Strength and Dimensional Accuracy of Nylon

    Science.gov (United States)

    Basavaraj, C. K.; Vishwas, M.

    2016-09-01

    This paper discusses the process parameters for fused deposition modelling (FDM). Layer thickness, orientation angle and shell thickness are the process variables considered in this study. Ultimate tensile strength, dimensional accuracy and manufacturing time are the response parameters. Taguchi's L9 orthogonal array was used to plan the experimental runs. Taguchi's S/N ratio was used to identify the set of process parameters that gives good results for each response characteristic. The effectiveness of each parameter was investigated using analysis of variance. The material used for the process parameter study was Nylon.
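    The two standard Taguchi S/N formulas implied here (larger-is-better for tensile strength, smaller-is-better for dimensional error and time) can be sketched with made-up replicate data; the measurement values below are hypothetical, not from the paper:

```python
import math

def sn_larger_is_better(values):
    """Taguchi S/N (dB) for responses to maximize, e.g. ultimate tensile strength."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

def sn_smaller_is_better(values):
    """Taguchi S/N (dB) for responses to minimize, e.g. dimensional deviation."""
    return -10.0 * math.log10(sum(v ** 2 for v in values) / len(values))

# Hypothetical replicate UTS measurements (MPa) at two layer-thickness levels
sn_thin = sn_larger_is_better([42.0, 44.0, 43.0])
sn_thick = sn_larger_is_better([36.0, 35.0, 37.0])
```

The level with the higher S/N ratio (here the thinner layer) would be selected for that response; ANOVA then apportions how much each factor contributes to the variation in S/N across the L9 runs.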

  18. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  19. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parameter estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parameter estimation directly for the model and compare these two methods of parameter estimation by random simulation. Our results show that MM gives a better data fit, more reasonable parameter estimates, and smaller prediction error compared with TSM.

  20. The parameter space of Cubic Galileon models for cosmic acceleration

    CERN Document Server

    Bellini, Emilio

    2013-01-01

    We use recent measurements of the expansion history of the universe to place constraints on the parameter space of cubic Galileon models. This gives strong constraints on the Lagrangian of these models. Most dynamical terms in the Galileon Lagrangian are constrained to be small, and the acceleration is effectively provided by a constant term in the scalar potential, thus reducing, effectively, to a LCDM model for the current acceleration. The effective equation of state is indistinguishable from that of a cosmological constant, w = -1, and the data constrain it to have no temporal variations beyond the few-percent level. The energy density of the Galileon can contribute only about 10% of the acceleration energy density, with the other 90% provided by a cosmological constant term. This demonstrates how useful direct measurements of the expansion history of the universe are at constraining the dynamical nature of dark energy.

  1. Analysis of Model Parameters for a Polymer Filtration Simulator

    Directory of Open Access Journals (Sweden)

    N. Brackett-Rozinsky

    2011-01-01

    Full Text Available We examine a simulation model for polymer extrusion filters and determine its sensitivity to filter parameters. The simulator is a three-dimensional, time-dependent discretization of a coupled system of nonlinear partial differential equations used to model fluid flow and debris transport, along with statistical relationships that define debris distributions and retention probabilities. The flow of polymer fluid and suspended debris particles is tracked to determine how well a filter performs and how long it operates before clogging. A filter may have multiple layers, characterized by thickness, porosity, and average pore diameter. In this work, the thickness of each layer is fixed, while the porosities and pore diameters vary in a two-layer and a three-layer study. The effects of porosity and average pore diameter on the measures of filter quality are calculated. For the three-layer model, these effects are tested for statistical significance using analysis of variance. Furthermore, the effects of each pair of interacting parameters are considered. This allows the detection of complexity, wherein changing two aspects of a filter together may generate results substantially different from what occurs when those same aspects change separately. The principal findings indicate that the first layer of a filter is the most important.

  2. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grid, their accurate modeling becomes a necessity, in order to gain robust real-time control over the network in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters so as to best fit experimental data, and to integrate it, along with models of energy sources and electrical loads, into a complete framework which represents a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm with low computational burden, and can therefore be repeated over time in order to account for parameter variations due to the battery's age and usage.
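    The hybrid idea — a cheap stochastic global stage followed by a deterministic local stage — can be sketched on a deliberately simple equivalent-circuit battery model. The model form, parameter bounds, and data below are invented for illustration and are not the paper's battery model:

```python
import random

def v_model(ocv, r, current):
    """Toy equivalent circuit: terminal voltage = open-circuit voltage - I*R."""
    return ocv - r * current

def sse(params, currents, volts):
    ocv, r = params
    return sum((v - v_model(ocv, r, i)) ** 2 for i, v in zip(currents, volts))

def hybrid_fit(currents, volts, seed=0):
    rng = random.Random(seed)
    # Stage 1: stochastic search over plausible bounds (global exploration)
    best = min(((rng.uniform(3.0, 4.2), rng.uniform(0.0, 0.5))
                for _ in range(2000)),
               key=lambda p: sse(p, currents, volts))
    # Stage 2: deterministic coordinate descent with a shrinking step (refinement)
    step = 0.05
    while step > 1e-7:
        moved = False
        for d in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = (best[0] + d[0], best[1] + d[1])
            if sse(cand, currents, volts) < sse(best, currents, volts):
                best, moved = cand, True
        if not moved:
            step *= 0.5
    return best

currents = [0.0, 1.0, 2.0, 5.0, 10.0]               # test currents (A)
volts = [v_model(3.7, 0.05, i) for i in currents]   # synthetic measurements
ocv_hat, r_hat = hybrid_fit(currents, volts)
```

Because the whole fit is cheap, it can be re-run periodically, which matches the paper's point about tracking parameter drift with battery age and usage.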

  3. Applying Atmospheric Measurements to Constrain Parameters of Terrestrial Source Models

    Science.gov (United States)

    Hyer, E. J.; Kasischke, E. S.; Allen, D. J.

    2004-12-01

    Quantitative inversions of atmospheric measurements have been widely applied to constrain atmospheric budgets of a range of trace gases. Experiments of this type have revealed persistent discrepancies between 'bottom-up' and 'top-down' estimates of source magnitudes. The most common atmospheric inversion uses the absolute magnitude as the sole parameter for each source, and returns the optimal value of that parameter. In order for atmospheric measurements to be useful for improving 'bottom-up' models of terrestrial sources, information about other properties of the sources must be extracted. As the density and quality of atmospheric trace gas measurements improve, examination of higher-order properties of trace gas sources should become possible. Our model of boreal forest fire emissions is parameterized to permit flexible examination of the key uncertainties in this source. Using output from this model together with the UM CTM, we examined the sensitivity of CO concentration measurements made by the MOPITT instrument to various uncertainties in the boreal source: geographic distribution of burned area, fire type (crown fires vs. surface fires), and fuel consumption in above-ground and ground-layer fuels. Our results indicate that carefully designed inversion experiments have the potential to help constrain not only the absolute magnitudes of terrestrial sources, but also the key uncertainties associated with 'bottom-up' estimates of those sources.

  4. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors, and the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically, stochastic models are applied in which parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure relying directly on the positions of the cell's trajectory and the covariance matrix of those positions. It is shown that this covariance is identical to the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data, showing reliable parameter estimation from single cell paths.
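    A much-reduced sketch of single-path estimation for the Brownian-motion-with-drift case: rather than the paper's full position-covariance machinery, the version below uses the increments of one simulated path, whose mean and variance identify the drift and diffusion coefficient. Names and parameter values are illustrative:

```python
import math
import random

def simulate_path(n_steps, dt, drift, diff, seed=7):
    """Brownian motion with drift: x(t+dt) = x(t) + v*dt + sqrt(2*D*dt)*N(0,1)."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += drift * dt + math.sqrt(2.0 * diff * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_drift_diffusion(path, dt):
    """Estimate (v, D) from increments: mean -> v*dt, variance -> 2*D*dt."""
    inc = [b - a for a, b in zip(path, path[1:])]
    n = len(inc)
    mean = sum(inc) / n
    var = sum((i - mean) ** 2 for i in inc) / (n - 1)
    return mean / dt, var / (2.0 * dt)

path = simulate_path(n_steps=20000, dt=0.01, drift=1.5, diff=0.5)
v_hat, d_hat = estimate_drift_diffusion(path, dt=0.01)
```

For fractional Brownian motion the increments are correlated, which is exactly why the full covariance (aging correlation) structure, rather than increment statistics alone, is needed in the general case.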

  5. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    International Nuclear Information System (INIS)

    Kumar, Prashant; Bansod, Baban K.S.; Debnath, Sanjit K.; Thakur, Praveen Kumar; Ghanshyam, C.

    2015-01-01

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration given to parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper

  6. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free-parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular, the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.

  7. Microbial Communities Model Parameter Calculation for TSPA/SR

    Energy Technology Data Exchange (ETDEWEB)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  8. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  9. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation

    International Nuclear Information System (INIS)

    Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N

    2014-01-01

    Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient's properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (p_I) and inspiration and expiration times (t_I, t_E) in pressure-controlled ventilation (PCV), and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first-order model of respiratory mechanics into the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal p_I and adequate t_E can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimizing inspiration pressure, and compared the algorithm's ‘optimized’ settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results correlate highly with the physicians' ventilation settings, with r = 0.975 for the inspiration pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential to improve lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
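
The nonlinear relation can be sketched numerically. The sketch below assumes the usual one-compartment model (resistance R, compliance C, time constant τ = R·C) with entirely made-up patient values; it illustrates the idea, not the authors' algorithm:

```python
import math

def required_pI(VA_target, RR, C, R, tI, VD):
    """Inspiratory pressure above PEEP [mbar] needed to reach a target alveolar
    ventilation VA_target [L/min] under PCV, for a first-order model with
    compliance C [L/mbar], resistance R [mbar*s/L], inspiration time tI [s],
    dead space VD [L] and respiratory rate RR [breaths/min]."""
    tau = R * C                              # time constant of the R-C model
    VT = VA_target / RR + VD                 # tidal volume required per breath
    # First-order inflation: VT = pI * C * (1 - exp(-tI/tau))  =>  solve for pI.
    return VT / (C * (1.0 - math.exp(-tI / tau)))

def optimize_tI(VA_target, RR, C, R, VD, n_tau_exp=3.0):
    """Scan tI and return (tI, pI) minimizing pI while keeping
    tE >= n_tau_exp * tau for adequate expiration (an assumed rule of thumb)."""
    tau = R * C
    cycle = 60.0 / RR
    best = None
    tI = 0.3
    while tI <= cycle - n_tau_exp * tau:
        pI = required_pI(VA_target, RR, C, R, tI, VD)
        if best is None or pI < best[1]:
            best = (tI, pI)
        tI += 0.01
    return best
```

Because inflation approaches pressure equilibrium as t_I grows, the minimal p_I lies at the longest t_I that still leaves adequate expiration time.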

  10. Pelvic tilt and truncal inclination: two key radiographic parameters in the setting of adults with spinal deformity.

    Science.gov (United States)

    Lafage, Virginie; Schwab, Frank; Patel, Ashish; Hawkinson, Nicola; Farcy, Jean-Pierre

    2009-08-01

    Prospective radiographic and clinical analysis. Objective: to investigate the relationship between spino-pelvic parameters and patient self-reported outcomes in adult subjects with spinal deformities. It is becoming increasingly recognized that the study of spinal alignment should include pelvic position. While pelvic incidence determines lumbar lordosis, pelvic tilt (PT) is a positional parameter reflecting compensation to spinal deformity. Correlation between plumbline offset (sagittal vertical axis [SVA]) and Health Related Quality of Life (HRQOL) measures has been demonstrated, but such a study is lacking for PT. This prospective study was carried out on 125 adult patients suffering from spinal deformity (mean age: 57 years). Full-length free-standing radiographs including the spine and pelvis were available for all patients. HRQOL instruments included: Oswestry Disability Index, Short Form-12, Scoliosis Research Society. Correlation analysis between radiographic spino-pelvic parameters and HRQOL measures was pursued. Correlation analysis revealed no significance pertaining to coronal plane parameters. Significant sagittal plane correlations were identified. SVA and truncal inclination measured by T1 spino-pelvic inclination (T1-SPI, the angle between the T1-hip axis and the vertical) correlated with: Scoliosis Research Society (appearance, activity, total score), Oswestry Disability Index, and Short Form-12 (physical component score). Correlation coefficients ranged from 0.42 < r < 0.55 (P < 0.0001). T1-SPI revealed greater correlation with HRQOL compared to SVA. PT showed correlation with HRQOL (0.28 < r < 0.42) and with SVA (r = 0.64, P < 0.0001). This study confirms that pelvic position measured via PT correlates with HRQOL in the setting of adult deformity. High values of PT express compensatory pelvic retroversion for sagittal spinal malalignment. This study also demonstrates that T1-SPI correlates significantly with HRQOL measures and outperforms SVA. This parameter carries the

  11. Lumped-parameter fuel rod model for rapid thermal transients

    International Nuclear Information System (INIS)

    Perkins, K.R.; Ramshaw, J.D.

    1975-07-01

    The thermal behavior of fuel rods during simulated accident conditions is extremely sensitive to the heat transfer coefficient which is, in turn, very sensitive to the cladding surface temperature and the fluid conditions. The development of a semianalytical, lumped-parameter fuel rod model which is intended to provide accurate calculations, in a minimum amount of computer time, of the thermal response of fuel rods during a simulated loss-of-coolant accident is described. The results show good agreement with calculations from a comprehensive fuel-rod code (FRAP-T) currently in use at Aerojet Nuclear Company
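
The lumped-parameter idea reduces the rod to a single heat balance, m·cp·dT/dt = q − h·A·(T − T_fluid). A toy sketch with invented numbers (not the FRAP-T physics, and with a constant heat transfer coefficient, which the abstract notes is not realistic during a transient):

```python
def rod_transient(q_gen, h, A, T_fluid, m, cp, T0, dt, t_end):
    """Lumped-parameter rod temperature: m*cp*dT/dt = q_gen - h*A*(T - T_fluid).
    Explicit Euler integration; returns the temperature history [K or degC]."""
    T, t, hist = T0, 0.0, [T0]
    while t < t_end:
        dTdt = (q_gen - h * A * (T - T_fluid)) / (m * cp)
        T += dt * dTdt
        t += dt
        hist.append(T)
    return hist
```

The history relaxes toward the steady state T_fluid + q_gen/(h·A) with time constant m·cp/(h·A), which is what makes the lumped formulation cheap compared to a full conduction solution.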

  12. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    Science.gov (United States)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  13. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    Full Text Available An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance and absorption of suspended matter without pigments and yellow substance in detritus, using the spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data, has been applied to sea waters of different types in the open ocean (case 1). Using the effective numerical single parameter classification with the water type optical index m as a parameter over the whole range of the open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  14. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on the stream water quality. Five sites were selected along the stream. Two sites were located prior to the mixing of leachate with the surface water, one was of leachate itself, and the other two sites were affected by leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study, correlation coefficients (r) among the different water quality parameters at the various sites were calculated using the Pearson model, and the average of each correlation between two parameters was also calculated. This showed that TDS and EC, and pH and BOD, have significantly increasing r values, while temperature and TDS, temperature and EC, DO and TBL, and DO and COD have decreasing r values. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differed significantly among the various sites. By the application of these two statistical approaches, TDS and EC show a strongly positive correlation, because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the 5 sites, which is further confirmed by linear regression. (author)
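
For reference, the Pearson correlation coefficient used in this kind of analysis is straightforward to compute; the series in the test are illustrative, not the Salhad measurements:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

Values near +1 (e.g. TDS vs. EC) indicate the two parameters rise together; values near −1 indicate one falls as the other rises.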

  15. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    Science.gov (United States)

    Blumberg, Leonid M

    2017-03-31

    If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T), ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_ref) - the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
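
The two models can be written down directly. The sketch below assumes the standard Clarke-Glew form in which a constant heat-capacity change ΔCp makes ΔH and ΔS temperature-dependent; the parameter values in the test are illustrative, not Blumberg's:

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def K_vant_hoff(T, dS, dH):
    """Classic 2-parameter van't Hoff model with temperature-independent dS, dH:
    ln K = dS/R - dH/(R*T)."""
    return math.exp(dS / R - dH / (R * T))

def K_clarke_glew(T, T_ref, dH_ref, dS_ref, dCp):
    """3-parameter Clarke-Glew model: dH(T) and dS(T) vary through a constant
    heat-capacity change dCp, anchored at the reference temperature T_ref."""
    dH = dH_ref + dCp * (T - T_ref)
    dS = dS_ref + dCp * math.log(T / T_ref)
    return math.exp(-(dH - T * dS) / (R * T))   # ln K = -(dH - T*dS)/(R*T)
```

With dCp = 0 the two models coincide, which is a convenient sanity check on either implementation.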

  16. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  17. Estimate of Seismological Parameters for the 1908 Messina Earthquake Through a new Data set Within the SISMOS Project.

    Science.gov (United States)

    Palombo, B.; Ferrari, G.; Bernardi, F.; Hunstad, I.; Perniola, B.

    2008-12-01

    The 1908 earthquake is one of the most catastrophic events in Italian history, recorded by most of the historical seismic stations existing at that time. Some of the seismograms recorded by these stations have already been used by many authors for the purpose of studying source characteristics, although only copies of the original recordings were available. Thanks to the Euroseismos project (2002-2007) and to the Sismos project, most of the original data (seismogram recordings and instrument parameter calibrations) for these events are now available in digital formats. Sismos technical facilities now allow us to apply the modern methods of digital-data analysis for the earthquakes recorded by mechanical and electromagnetic seismographs. The Sismos database has recently acquired many original seismograms and related instrumental parameters for the 1908 Messina earthquake, recorded by 14 stations distributed worldwide and never before used in previous works. We have estimated the main event parameters (i.e. location, Ms, Mw and focal mechanism) with the new data set. The aim of our work is to provide the scientific community with a reliable size and source estimation for accurate and consistent seismic hazard evaluation in Sicily, a region characterized by high long-term seismicity.

  18. Convergence of surface diffusion parameters with model crystal size

    Science.gov (United States)

    Cohen, Jennifer M.; Voter, Arthur F.

    1994-07-01

    A study of the variation in the calculated quantities for adatom diffusion with respect to the size of the model crystal is presented. The reported quantities include surface diffusion barrier heights, pre-exponential factors, and dynamical correction factors. Embedded atom method (EAM) potentials were used throughout this effort. Both the layer size and the depth of the crystal were found to influence the values of the Arrhenius factors significantly. In particular, exchange type mechanisms required a significantly larger model than standard hopping mechanisms to determine adatom diffusion barriers of equivalent accuracy. The dynamical events that govern the corrections to transition state theory (TST) did not appear to be as sensitive to crystal depth. Suitable criteria for the convergence of the diffusion parameters with regard to the rate properties are illustrated.
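
The Arrhenius factors reported for adatom diffusion combine in the standard transition-state rate expression k = ν₀·exp(−Ea/(k_B·T)); the prefactor and barrier below are typical illustrative magnitudes, not the paper's EAM values:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant [eV/K]

def hop_rate(prefactor_hz, barrier_ev, T):
    """Arrhenius (harmonic TST) adatom hop rate:
    k = nu0 * exp(-Ea / (kB * T)), with Ea in eV and T in K."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * T))
```

The exponential sensitivity to the barrier height is why converged barrier values (and hence sufficiently large model crystals) matter so much for predicted rates.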

  19. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150 000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte-Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation), which is easily moisture-stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought-resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of
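
The Monte-Carlo conditioning step can be sketched generically (GLUE-style). The toy "model" in the test below is a one-store dry-season evaporation recession standing in for the hydrological model, with synthetic observations in place of the satellite estimates:

```python
import random

def behavioural_sets(simulate, observed, n_samples, bounds, threshold, seed=1):
    """GLUE-style Monte-Carlo conditioning: sample parameter sets uniformly
    within bounds and keep those whose RMSE against the observations is
    below the behavioural threshold. Returns (params, rmse) pairs."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        sim = simulate(params)
        rmse = (sum((s - o) ** 2 for s, o in zip(sim, observed))
                / len(observed)) ** 0.5
        if rmse < threshold:
            kept.append((params, rmse))
    return kept
```

Plotting the retained parameter sets per land-cover class is what reveals the clustering the abstract describes.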

  20. Piecewise Model and Parameter Obtainment of Governor Actuator in Turbine

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2015-01-01

    Full Text Available The governor actuators in some heat-engine plants have nonlinear valves. This nonlinearity of the valves may lead to inaccuracy in the opening and closing time constants calculated from the whole-segment fully-open and fully-close experimental test curves of the valve. An improved mathematical model of the turbine governor actuator is proposed to reflect the nonlinearity of the valve, in which main and auxiliary piecewise opening and closing time constants, instead of fixed oil motive opening and closing time constants, are adopted to describe the characteristics of the actuator. The main opening and closing time constants are obtained from the linear segments of the whole fully-open and fully-close curves. The parameters of the proportional-integral-derivative (PID) controller are identified from small-disturbance experimental tests of the valve. Then the auxiliary opening and closing time constants and the piecewise opening and closing valve points are determined from the fully-open/close experimental tests. Several test functions are selected to compare the hybrid genetic algorithm-particle swarm optimization (GA-PSO) algorithm with other basic intelligence algorithms. The effectiveness of the piecewise linear model and its parameters is validated by practical power plant case studies.
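
The piecewise time-constant idea can be sketched as a simple simulation; the breakpoint and time constants below are invented, not values from the paper:

```python
def opening_curve(T_main, T_aux, breakpoint, dt=0.01, t_end=20.0):
    """Normalized valve opening vs. time for a piecewise-linear actuator:
    travel rate 1/T_main while position < breakpoint, 1/T_aux above it.
    T_main, T_aux are full-travel time constants [s]."""
    pos, t, traj = 0.0, 0.0, [(0.0, 0.0)]
    while t < t_end and pos < 1.0:
        rate = 1.0 / T_main if pos < breakpoint else 1.0 / T_aux
        pos = min(1.0, pos + rate * dt)
        t += dt
        traj.append((t, pos))
    return traj
```

A single fixed time constant fitted to the whole curve would misrepresent both segments, which is the inaccuracy the piecewise model is meant to remove.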

  1. Evaluation of J and CTOD (Crack Tip Opening Displacement) fracture parameters for pipeline steels using Single Edge Notch Tension SE(T) specimens

    Energy Technology Data Exchange (ETDEWEB)

    Paredes Tobar, Lenin Marcelo; Ruggieri, Claudio [Universidade de Sao Paulo (USP), SP (Brazil). Escola Politecnica. Dept. de Engenharia Naval e Oceanica

    2009-12-19

    This work presents an evaluation procedure to determine the elastic-plastic J-integral and CTOD for pin-loaded and clamped single edge notch tension (SE(T)) specimens based upon the eta-method. The primary objective is to derive estimation equations applicable to determine J and CTOD fracture parameters for a wide range of a/W-ratios and material flow properties. Very detailed non-linear finite element analyses for plane-strain and full-thickness, 3-D models provide the evolution of load with increased crack mouth opening displacement which is required for the estimation procedure. The present analyses, when taken together with previous studies provide a fairly extensive body of results which serve to determine parameters J and CTOD for different materials using tension specimens with varying geometries. (author)

  2. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  3. On the identification of multiple space dependent ionic parameters in cardiac electrophysiology modelling

    Science.gov (United States)

    Abidi, Yassine; Bellassoued, Mourad; Mahjoub, Moncef; Zemzemi, Nejib

    2018-03-01

    In this paper, we consider the inverse problem of space dependent multiple ionic parameters identification in cardiac electrophysiology modelling from a set of observations. We use the monodomain system known as a state-of-the-art model in cardiac electrophysiology and we consider a general Hodgkin-Huxley formalism to describe the ionic exchanges at the microscopic level. This formalism covers many physiological transmembrane potential models including those in cardiac electrophysiology. Our main result is the proof of the uniqueness and a Lipschitz stability estimate of ion channels conductance parameters based on some observations on an arbitrary subdomain. The key idea is a Carleman estimate for a parabolic operator with multiple coefficients and an ordinary differential equation system.

  4. Time-Varying FOPDT Modeling and On-line Parameter Identification

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Sun, Zhen

    2013-01-01

    A type of Time-Varying First-Order Plus Dead-Time (TV-FOPDT) model is extended from SISO format into a MISO version by explicitly taking the disturbance input into consideration. Correspondingly, a set of on-line parameter identification algorithms oriented to the MISO TV-FOPDT model is proposed, based on Mixed-Integer Nonlinear Programming, Least-Mean-Square and sliding-window techniques. The proposed approaches can simultaneously estimate the time-dependent system parameters, as well as the unknown disturbance input if it is the case, in an on-line manner. The proposed concepts and algorithms are first illustrated through a numerical example, and then applied to investigate transient superheat dynamic modeling in a supermarket refrigeration system.
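
The Least-Mean-Square ingredient can be sketched for a discretized first-order model y[k+1] = a·y[k] + b·u[k] (dead time omitted; the MINLP and sliding-window parts of the paper's scheme are not shown):

```python
def nlms_identify(u, y, mu=0.5, eps=1e-6):
    """Normalized LMS estimate of (a, b) in y[k+1] = a*y[k] + b*u[k].
    mu is the adaptation gain, eps guards against a zero regressor norm."""
    a_hat, b_hat = 0.0, 0.0
    for k in range(len(y) - 1):
        phi = (y[k], u[k])                                  # regressor
        err = y[k + 1] - (a_hat * phi[0] + b_hat * phi[1])  # prediction error
        norm = eps + phi[0] ** 2 + phi[1] ** 2
        a_hat += mu * err * phi[0] / norm
        b_hat += mu * err * phi[1] / norm
    return a_hat, b_hat
```

With persistently exciting input the estimates converge to (a, b); restricting the update to a sliding window is what would let them track time-varying parameters.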

  5. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
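
The MCMC-Bayesian step can be sketched with a random-walk Metropolis sampler on a toy one-parameter problem (a linear "model" with Gaussian noise and a flat prior standing in for CLM4 and its hydrologic parameters):

```python
import math
import random

def metropolis(log_post, x0, n_iter, step, seed=42):
    """Random-walk Metropolis sampler for a 1-D posterior.
    log_post: unnormalized log-posterior; step: proposal standard deviation."""
    rng = random.Random(seed)
    x, chain = x0, []
    lp = log_post(x)
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

After discarding a burn-in, the chain's histogram approximates the posterior, so narrowing predictive intervals with more data shows up directly as a narrowing histogram.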

  6. Direct Effective Dose Calculations in Pediatric Fluoroscopy-Guided Abdominal Interventions with Rando-Alderson Phantoms – Optimization of Preset Parameter Settings

    Science.gov (United States)

    Wildgruber, Moritz; Müller-Wille, René; Goessmann, Holger; Uller, Wibke; Wohlgemuth, Walter A.

    2016-01-01

    Objective The aim of the study was to calculate the effective dose during fluoroscopy-guided pediatric interventional procedures of the liver in a phantom model before and after adjustment of preset parameters. Methods Organ doses were measured in three anthropomorphic Rando-Alderson phantoms representing children of various ages and body weights (newborn 3.5 kg, toddler 10 kg, child 19 kg). Collimation was performed focusing on the upper abdomen, representing mock interventional radiology procedures such as percutaneous transhepatic cholangiography and drainage placement (PTCD). Fluoroscopy and digital subtraction angiography (DSA) acquisitions were performed in a posterior-anterior geometry using a state-of-the-art flat-panel detector. Effective dose was directly measured from multiple incorporated thermoluminescent dosimeters (TLDs) using two different parameter settings. Results Effective dose values for each pediatric phantom were below 0.1 mSv per minute of fluoroscopy, and below 1 mSv for a 1-minute DSA acquisition with a frame rate of 2 f/s. Lowering the values for the detector entrance dose enabled a reduction of the applied effective dose by 12% to 27% for fluoroscopy and by 22% to 63% for DSA acquisitions. Similarly, organ doses of radiosensitive organs could be reduced by over 50%, especially when close to the primary x-ray beam. Conclusion Modification of preset parameter settings made it possible to decrease the effective dose for pediatric interventional procedures, as determined by effective dose calculations using dedicated pediatric Rando-Alderson phantoms. PMID:27556584

  7. Parameter estimation with a novel gradient-based optimization method for biological lattice-gas cellular automaton models.

    Science.gov (United States)

    Mente, Carsten; Prade, Ina; Brusch, Lutz; Breier, Georg; Deutsch, Andreas

    2011-07-01

    Lattice-gas cellular automata (LGCAs) can serve as stochastic mathematical models for collective behavior (e.g. pattern formation) emerging in populations of interacting cells. In this paper, a two-phase optimization algorithm for global parameter estimation in LGCA models is presented. In the first phase, local minima are identified through gradient-based optimization. Algorithmic differentiation is adopted to calculate the necessary gradient information. In the second phase, for global optimization of the parameter set, a multi-level single-linkage method is used. As an example, the parameter estimation algorithm is applied to a LGCA model for early in vitro angiogenic pattern formation.
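
The two-phase structure can be sketched in one dimension; a finite-difference gradient stands in for algorithmic differentiation, and the multi-level single-linkage phase is reduced to a plain multi-start for brevity:

```python
def grad_desc(f, x0, lr=0.01, iters=500, h=1e-6):
    """Phase 1: local minimization by gradient descent with a central
    finite-difference gradient (a stand-in for algorithmic differentiation)."""
    x = x0
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

def multistart_minimize(f, starts, **kw):
    """Phase 2 (simplified): run local searches from many starting points and
    keep the best, a crude stand-in for multi-level single-linkage clustering."""
    locals_found = [grad_desc(f, s, **kw) for s in starts]
    return min(locals_found, key=f)
```

On a double-well objective the local phase finds a minimum per basin and the global phase selects the deeper one, which is the essential behaviour of the two-phase algorithm.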

  8. Estimating Important Electrode Parameters of High Temperature PEM Fuel Cells By Fitting a Model to Polarisation Curves and Impedance Spectra

    DEFF Research Database (Denmark)

    Vang, Jakob Rabjerg; Zhou, Fan; Andreasen, Søren Juhl

    2015-01-01

    A high temperature PEM (HTPEM) fuel cell model capable of simulating both steady state and dynamic operation is presented. The purpose is to enable extraction of unknown parameters from sets of impedance spectra and polarisation curves. The model is fitted to two polarisation curves and four...

  9. Application of the Numbered Head Together Strategy in the Setting of the STAD Learning Model

    Directory of Open Access Journals (Sweden)

    Muhammad Mifta Fausan

    2016-07-01

    Full Text Available This study aims to determine the increase in motivation and in the biology learning outcomes of students through the implementation of the Numbered Head Together (NHT) strategy in the setting of the Student Teams Achievement Division (STAD) learning model based on Lesson Study (LS). The subjects in this study were students of class X IS 1 MAN 3 Malang. The implementation of this study consisted of two cycles, and each cycle consisted of three meetings. The data obtained were analyzed using qualitative and quantitative descriptive statistical analysis. The research instruments used were observation sheets, tests, LS monitoring sheets and questionnaires. The results of this study indicate that implementing the NHT strategy in the setting of the STAD learning model based on LS can improve students' motivation and biology learning outcomes. Students' motivation was 69% in the first cycle and increased to 86% in the second cycle. For cognitive learning, classical completeness was 74% in the first cycle and increased to 93% in the second cycle. The affective learning outcomes of students were 93% in the first cycle and increased to 100% in the second cycle. Furthermore, the psychomotor learning outcomes of students also increased, from 74% in the first cycle to 93% in the second cycle.

  10. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    Science.gov (United States)

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour...
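The wave-by-wave global search described above can be sketched in a few lines. This is a toy illustration, not the authors' code: the one-output "model" stands in for a fast Bayesian emulator, and the observation, variances and implausibility cutoff are all invented for the example.

```python
import numpy as np

# Toy stand-in for a systems biology model (or its fast emulator):
# one output as a function of two rate parameters in [0, 1].
def model(x):
    return 3.0 * x[0] + np.sin(5.0 * x[1])

z_obs, var_obs, var_disc = 2.0, 0.05, 0.05  # observation and model-discrepancy variances

def implausibility(x):
    # I(x) = |z - f(x)| / sqrt(total variance); a real emulator would
    # contribute a third variance term for its own code uncertainty.
    return abs(z_obs - model(x)) / np.sqrt(var_obs + var_disc)

# Wave 1 of history matching: discard parameter settings with I(x) > 3.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(10000, 2))
keep = np.array([implausibility(x) <= 3.0 for x in candidates])
non_implausible = candidates[keep]
print(f"{len(non_implausible)} of 10000 candidates survive wave 1")
```

Later waves would refit the emulator over the surviving region and repeat with a tighter cutoff.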

  11. Modeling error and stability of endothelial cytoskeletal membrane parameters based on modeling transendothelial impedance as resistor and capacitor in series.

    Science.gov (United States)

    Bodmer, James E; English, Anthony; Brady, Megan; Blackwell, Ken; Haxhinasto, Kari; Fotedar, Sunaina; Borgman, Kurt; Bai, Er-Wei; Moy, Alan B

    2005-09-01

    Transendothelial impedance across an endothelial monolayer grown on a microelectrode has previously been modeled as a repeating pattern of disks in which the electrical circuit consists of a resistor and capacitor in series. Although this numerical model breaks down barrier function into measurements of cell-cell adhesion, cell-matrix adhesion, and membrane capacitance, such solution parameters can be inaccurate without understanding model stability and error. In this study, we have evaluated modeling stability and error by using a chi-square (χ²) evaluation and the Levenberg-Marquardt nonlinear least-squares (LM-NLS) method on the real and/or imaginary data, in which the experimental measurement is compared with the calculated measurement derived by the model. Modeling stability and error were dependent on current frequency and the type of experimental data modeled. Solution parameters of cell-matrix adhesion were most susceptible to modeling instability. Furthermore, the LM-NLS method displayed frequency-dependent instability of the solution parameters, regardless of whether the real or imaginary data were analyzed. However, the LM-NLS method identified stable and reproducible solution parameters between all types of experimental data when a defined frequency spectrum of the entire data set was selected on the basis of a criterion of minimizing error. The frequency bandwidth that produced stable solution parameters varied greatly among different data types. Thus a numerical model based on characterizing transendothelial impedance as a resistor and capacitor in series and as a repeating pattern of disks is not sufficient to characterize the entire frequency spectrum of experimental transendothelial impedance.
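The LM-NLS fitting step amounts to minimizing stacked real and imaginary residuals of the series resistor-capacitor model Z(ω) = R + 1/(jωC). A minimal SciPy sketch on synthetic data (the circuit values, frequency range and noise level are invented, not the paper's experimental values):

```python
import numpy as np
from scipy.optimize import least_squares

# Series resistor-capacitor model of the impedance: Z(w) = R + 1/(j*w*C).
def z_model(p, w):
    R, C = p
    return R + 1.0 / (1j * w * C)

def residuals(p, w, z_data):
    # Levenberg-Marquardt needs real residuals, so stack real and imaginary parts.
    r = z_model(p, w) - z_data
    return np.concatenate([r.real, r.imag])

# Synthetic "measured" spectrum with known R = 500 ohm, C = 2e-6 F plus noise.
w = 2 * np.pi * np.logspace(1, 5, 40)
rng = np.random.default_rng(1)
z_data = (z_model([500.0, 2e-6], w)
          + rng.normal(0, 1.0, w.size) + 1j * rng.normal(0, 1.0, w.size))

fit = least_squares(residuals, x0=[100.0, 1e-6], args=(w, z_data), method="lm")
R_hat, C_hat = fit.x
print(f"R = {R_hat:.1f} ohm, C = {C_hat:.2e} F")
```

Restricting `w` to a sub-band before fitting reproduces the paper's point that solution stability depends on the chosen frequency spectrum.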

  12. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-order polynomial fitted to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are also analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, both slightly better than the GPS broadcast Klobuchar model.
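The delay these models are correcting follows from the standard first-order relation between TEC and group delay, delay [m] = 40.3 · TEC / f², with TEC in electrons/m² and f in Hz. A small sketch (the 50 TECU value is illustrative):

```python
# First-order ionospheric group delay from total electron content (TEC).
def iono_delay_m(tec_tecu: float, freq_hz: float) -> float:
    tec = tec_tecu * 1e16          # 1 TECU = 1e16 electrons/m^2
    return 40.3 * tec / freq_hz**2

GPS_L1 = 1575.42e6  # Hz (also the Galileo E1 centre frequency)

delay = iono_delay_m(50.0, GPS_L1)  # a moderately disturbed ionosphere
print(f"{delay:.2f} m")             # roughly 8 m of range error
```

A model that mitigates ~70% of this delay, as NeQuickC does, leaves a residual error of roughly 2-3 m in this example.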

  13. Environmental monitoring: setting alert and action limits based on a zero-inflated model.

    Science.gov (United States)

    Yang, Harry; Zhao, Wei; O'day, Terrence; Fleming, William

    2013-01-01

    The primary purpose of an environmental monitoring program is to provide oversight for the microbiological cleanliness of manufacturing operations and document the state of control of the facility. Key to the success of the program is the establishment of alert and action limits. In practice, several statistical methods including normal, Poisson, and negative binomial modeling have been routinely used to set these limits. However, data collected from clean rooms or controlled locations often display an excess of zeros and overdispersion, caused by sampling population heterogeneity. Such data render it inappropriate to use the traditional methods to set alert and action levels. In this paper, a method based on a zero-inflated negative binomial model is proposed for the above instances. The method provides an enhanced alternative for trending environmental data of classified rooms, and it is demonstrated to show a clear improvement in terms of model fitting and parameter estimation.
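Once a zero-inflated negative binomial (ZINB) has been fitted, alert and action limits fall out as upper percentiles of its CDF. A sketch with SciPy; the parameters pi, n, p below are hypothetical, not fitted clean-room values:

```python
from scipy.stats import nbinom

# Zero-inflated negative binomial: with probability pi the count is a
# structural zero, otherwise it follows NB(n, p).
def zinb_cdf(k, pi, n, p):
    return pi + (1.0 - pi) * nbinom.cdf(k, n, p)

def limit(percentile, pi, n, p, kmax=1000):
    # Smallest count whose ZINB CDF reaches the target percentile.
    for k in range(kmax + 1):
        if zinb_cdf(k, pi, n, p) >= percentile:
            return k
    raise ValueError("increase kmax")

pi, n, p = 0.6, 2.0, 0.4          # hypothetical fit to monitoring counts
alert = limit(0.95, pi, n, p)     # alert limit at the 95th percentile
action = limit(0.99, pi, n, p)    # action limit at the 99th percentile
print("alert:", alert, "action:", action)
```

A plain negative binomial fitted to the same zero-heavy data would inflate its dispersion to absorb the zeros and push both limits upward, which is the improvement the paper quantifies.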

  14. Assessing composition and structure of soft biphasic media from Kelvin-Voigt fractional derivative model parameters

    Science.gov (United States)

    Zhang, Hongmei; Wang, Yue; Fatemi, Mostafa; Insana, Michael F.

    2017-03-01

    Kelvin-Voigt fractional derivative (KVFD) model parameters have been used to describe viscoelastic properties of soft tissues. However, translating model parameters into a concise set of intrinsic mechanical properties related to tissue composition and structure remains challenging. This paper begins by exploring these relationships using biphasic emulsion materials with known composition. Mechanical properties are measured by analyzing data from two indentation techniques: ramp-stress relaxation and load-unload hysteresis tests. Material composition is predictably correlated with viscoelastic model parameters. Model parameters estimated from the tests reveal that the elastic modulus E0 closely approximates the shear modulus for pure gelatin. The fractional-order parameter α and the time constant τ vary monotonically with the volume fraction of the material's fluid component: α characterizes medium fluidity and the rate of energy dissipation, and τ is a viscous time constant. Numerical simulations suggest that the viscous coefficient η is proportional to the energy EA lost during quasi-static force-displacement cycles. The slope of EA versus η is determined by α and the applied indentation ramp time Tr. Experimental measurements from phantom and ex vivo liver data show close agreement with theoretical predictions of the η-EA relation. The relative error is less than 20% for emulsions and 22% for liver. We find that KVFD model parameters form a concise feature space for biphasic medium characterization that describes time-varying mechanical properties. The experimental work was carried out at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. Methodological development, including numerical simulation and all data analysis, was carried out at the School of Life Science and Technology, Xi'an Jiaotong University, 710049, China.

  15. Analysis of root growth from a phenotyping data set using a density-based model.

    Science.gov (United States)

    Kalogiros, Dimitris I; Adu, Michael O; White, Philip J; Broadley, Martin R; Draye, Xavier; Ptashnyk, Mariya; Bengough, A Glyn; Dupuy, Lionel X

    2016-02-01

    Major research efforts are targeting the improved performance of root systems for more efficient use of water and nutrients by crops. However, characterizing root system architecture (RSA) is challenging, because roots are difficult objects to observe and analyse. A model-based analysis of RSA traits from phenotyping image data is presented. The model can successfully back-calculate growth parameters without the need to measure individual roots. The mathematical model uses partial differential equations to describe root system development. Methods based on kernel estimators were used to quantify root density distributions from experimental image data, and different optimization approaches to parameterize the model were tested. The model was tested on root images of a set of 89 Brassica rapa L. individuals of the same genotype grown for 14 d after sowing on blue filter paper. Optimized root growth parameters enabled the final (modelled) length of the main root axes to be matched within 1% of their mean values observed in experiments. Parameterized values for elongation rates were within ±4% of the values measured directly on images. Future work should investigate the time dependency of growth parameters using time-lapse image data. The approach is a potentially powerful quantitative technique for identifying crop genotypes with more efficient root systems, using (even incomplete) data from high-throughput phenotyping systems. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
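The kernel-estimator step above (turning per-image root observations into a continuous density the PDE model can be fitted against) can be sketched as follows. The depth data are synthetic stand-ins, not the B. rapa measurements:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical root-tip depths (cm below the seed) digitized from plate images:
# a deep cluster of main axes plus a shallow cluster of laterals.
rng = np.random.default_rng(2)
depths = np.concatenate([rng.normal(4.0, 1.0, 200),
                         rng.normal(1.5, 0.5, 100)])

# Gaussian kernel estimate of the 1D root density profile.
kde = gaussian_kde(depths)
grid = np.linspace(0.0, 8.0, 81)
density = kde(grid)
peak_depth = grid[np.argmax(density)]
print(f"estimated density peaks near {peak_depth:.1f} cm")
```

Growth parameters are then obtained by optimizing the PDE model's predicted density against profiles like `density`, without ever measuring individual roots.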

  16. Inference of missing data and chemical model parameters using experimental statistics

    Science.gov (United States)

    Casey, Tiernan; Najm, Habib

    2017-11-01

    A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NCSA under contract DE-NA-0003525.
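The propose-and-accept data exploration can be illustrated with a rejection-ABC sketch for an Arrhenius rate, k = A·exp(-Ea/RT). All numbers (the reported statistic, prior ranges, tolerance) are invented for the example, not taken from the H2-O2 system:

```python
import numpy as np

R = 8.314  # J/(mol K)

# Reported statistic for a hypothetical rate: ln k at 1000 K with 1-sigma error.
T_obs, lnk_obs, sigma = 1000.0, 5.0, 0.2

# ABC rejection: propose (ln A, Ea) from a prior, simulate a noisy hypothetical
# measurement, and keep proposals whose statistic matches the report.
rng = np.random.default_rng(3)
accepted = []
for _ in range(20000):
    lnA = rng.uniform(5.0, 15.0)
    Ea = rng.uniform(2e4, 8e4)          # J/mol
    lnk_sim = lnA - Ea / (R * T_obs) + rng.normal(0.0, sigma)
    if abs(lnk_sim - lnk_obs) < 0.1:    # tolerance on the summary statistic
        accepted.append((lnA, Ea))

accepted = np.array(accepted)
print(len(accepted), "accepted; the joint density lies on a ridge lnA ~ Ea/(R*T)")
```

The accepted sample approximates the joint posterior on the rate parameters given only the reported nominal value and uncertainty, which is the essence of the inference-of-missing-data approach.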

  17. Application of multi-parameter chorus and plasmaspheric hiss wave models in radiation belt modeling

    Science.gov (United States)

    Aryan, H.; Kang, S. B.; Balikhin, M. A.; Fok, M. C. H.; Agapitov, O. V.; Komar, C. M.; Kanekal, S. G.; Nagai, T.; Sibeck, D. G.

    2017-12-01

    Numerical simulation studies of the Earth's radiation belts are important to understand the acceleration and loss of energetic electrons. The Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model, along with many other radiation belt models, requires inputs for the pitch angle, energy, and cross diffusion of electrons due to chorus and plasmaspheric hiss waves. These parameters are calculated using statistical wave distribution models of chorus and plasmaspheric hiss amplitudes. In this study we incorporate recently developed multi-parameter chorus and plasmaspheric hiss wave models based on geomagnetic index and solar wind parameters. We perform CIMI simulations for two geomagnetic storms and compare the flux enhancement of MeV electrons with data from the Van Allen Probes and Akebono satellites. We show that the relativistic electron fluxes calculated with multi-parameter wave models resemble the observations more accurately than the relativistic electron fluxes calculated with single-parameter wave models. This indicates that wave models based on a combination of geomagnetic index and solar wind parameters are more effective as inputs to radiation belt models.

  18. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

    Full Text Available This paper deals with information behavior as support for the design of communication models in the areas of Information Science, Librarianship and Music. The proposed communication models are based on the models of Tubbs and Moss (2003), Garvey and Griffith (1972) as adapted by Hurd (1996), and Wilson (1999). The following questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment? (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area? and (iii) how do the contexts involved in scientific collaboration processes influence scientific production in the information science field in Brazil? The article includes a literature review on information behavior and its place in scientific communication, considering the influence of the context and/or situation of the subjects involved in the motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept or set of concepts can be used from different perspectives, thus reaching different results.

  19. KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set

    KAUST Repository

    Al-Shuhail, Abdullatif A.

    2017-06-01

    The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The models and data sets are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.

  20. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended before launching exhaustive computations. However, different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and the least-squares linear Taylor differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely to be stuck in what is presumably a local minimum. Deterministic methods based on...
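The local-minimum problem described above is commonly mitigated by multi-start search: run a local optimizer from many random initial guesses and keep the best result. A sketch with a deliberately multimodal one-parameter error function (illustrative, not a neuron-model matching function):

```python
import numpy as np
from scipy.optimize import minimize

# An error surface with many local minima superimposed on a broad bowl.
def error(x):
    return np.sin(3.0 * x[0]) + 0.1 * (x[0] - 2.0) ** 2

# A single gradient-following run settles into whichever basin it starts in,
# so restart from many initial guesses and keep the lowest error found.
rng = np.random.default_rng(4)
best = min((minimize(error, x0=[x0], method="Nelder-Mead")
            for x0 in rng.uniform(-10.0, 10.0, 30)),
           key=lambda r: r.fun)
print(f"best minimum near x = {best.x[0]:.2f}, error = {best.fun:.3f}")
```

Any single start from, say, x0 = -8 would report a shallow local minimum; the restarts are what recover the global one.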

  1. The effects of computed tomography scanner parameters on the quality of the reverse triangular surface model of the fibula

    Energy Technology Data Exchange (ETDEWEB)

    Hayat, Nasir; Ahmad, Mushtaq, E-mail: nasirhayat@uet.edu.pk [Faculty of Mechanical Engineering, UET, Lahore (Pakistan)

    2016-01-15

    This study investigates the effects of computed tomography (CT) parameters on the quality and size of the reverse triangular surface model with an objective of obtaining an accurate 3D triangular surface model of complex-shaped customized objects for reverse engineering and many other applications such as surgical planning and finite element analysis. For this purpose, the fibula of a human knee joint was CT scanned by changing various parameters (slice thickness, slice spacing, pixel size, X-ray tube current and helical pitch) over wide ranges. Three-dimensional triangular surface models were created from point cloud data extracted from the CT image data. To assess the influences of scanning parameters on the surface quality and accuracy, the resulting surface models were qualitatively compared based on various anatomical features. Statistical analysis was used to quantify the deviations of surface models with different scanning parameter levels from the reference CT surface model. The results show that these parameters to a varying degree affect the surface quality, reproduction of various anatomical details and size of the resulting surface model. Moreover, these parameters are highly dependent on each other. Interactive effects of these parameters have been discussed and recommendations have been made for parameter settings. The results of the study would help to improve the accuracy of the 3D surface models required for customized implants and other applications. (author)

  2. Ecohydrological model parameter selection for stream health evaluation.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in the development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), the k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of the best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our...

  3. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    Science.gov (United States)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum, with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis, leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root...
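The Monte-Carlo-plus-PRCC step can be sketched generically. Partial rank correlation rank-transforms inputs and output, regresses out the other inputs, and correlates the residuals; the three-parameter "model" below is an invented stand-in for ORCHIDEE-STICS:

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    Xr = np.apply_along_axis(rankdata, 0, X)
    yr = rankdata(y)
    out = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.delete(Xr, j, axis=1), np.ones(len(yr))])
        # Residuals of x_j and y after removing the other ranked inputs.
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

# Monte-Carlo sample of three "crop model parameters"; only the first two
# actually drive the output in this toy example.
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + rng.normal(0.0, 0.05, 500)

print(np.round(prcc(X, y), 2))  # large for parameters 0 and 1, near 0 for 2
```

The magnitude and sign of each coefficient rank the parameters by their monotonic influence on the output, which is how the seven dominant parameters are identified.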

  4. Are subject-specific musculoskeletal models robust to the uncertainties in parameter identification?

    Science.gov (United States)

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be...

  5. Are subject-specific musculoskeletal models robust to the uncertainties in parameter identification?

    Directory of Open Access Journals (Sweden)

    Giordano Valente

    Full Text Available Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force...

  6. The electronic disability record: purpose, parameters, and model use case.

    Science.gov (United States)

    Tulu, Bengisu; Horan, Thomas A

    2009-01-01

    The active engagement of consumers is an important factor in achieving widespread success of health information systems. The disability community represents a major segment of the healthcare arena, with more than 50 million Americans experiencing some form of disability. In keeping with the "consumer-driven" approach to e-health systems, this paper considers the distinctive aspects of electronic and personal health record use by this segment of society. Drawing upon the information shared during two national policy forums on this topic, the authors present the concept of Electronic Disability Records (EDR). The authors outline the purpose and parameters of such records, with specific attention to its ability to organize health and financial data in a manner that can be used to expedite the disability determination process. In doing so, the authors discuss its interaction with Electronic Health Records (EHR) and Personal Health Records (PHR). The authors then draw upon these general parameters to outline a model use case for disability determination and discuss related implications for disability health management. The paper further reports on the subsequent considerations of these and related deliberations by the American Health Information Community (AHIC).

  7. The S-parameter in Holographic Technicolor Models

    CERN Document Server

    Agashe, Kaustubh; Grojean, Christophe; Reece, Matthew

    2007-01-01

    We study the S parameter, considering especially its sign, in models of electroweak symmetry breaking (EWSB) in extra dimensions, with fermions localized near the UV brane. Such models are conjectured to be dual to 4D strong dynamics triggering EWSB. The motivation for such a study is that a negative value of S can significantly ameliorate the constraints from electroweak precision data on these models, allowing lower mass scales (TeV or below) for the new particles and leading to easier discovery at the LHC. We first extend an earlier proof of S>0 for EWSB by boundary conditions in arbitrary metric to the case of general kinetic functions for the gauge fields or arbitrary kinetic mixing. We then consider EWSB in the bulk by a Higgs VEV showing that S is positive for arbitrary metric and Higgs profile, assuming that the effects from higher-dimensional operators in the 5D theory are sub-leading and can therefore be neglected. For the specific case of AdS_5 with a power law Higgs profile, we also show that S ~ ...

  8. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    Science.gov (United States)

    Zhao, Song-Feng; Huang, Fang; Wang, Guo-Li; Zhou, Xiao-Xin

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so-called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using density functional theory. We examine the alignment-dependent tunneling ionization probabilities from the MO-ADK model for several molecules by comparing with molecular strong-field approximation (MO-SFA) calculations. We show that the molecular Perelomov–Popov–Terent'ev (MO-PPT) model can successfully give the laser-wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules with antibonding valence orbitals (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  9. Sound propagation and absorption in foam - A distributed parameter model.

    Science.gov (United States)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.

  10. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
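
The verification ideas above can be illustrated with a minimal Bayesian calibration. The sketch below calibrates a single conductivity-like parameter of a toy steady-state rod model with random-walk Metropolis; the model, its parameter values and the flat prior are assumptions for illustration, not the dissertation's actual HIV or heat models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy steady-state rod: T(x) = T_amb + q*(L - x)/k  (hypothetical stand-in
# for a heat model; q, L, T_amb are assumed known, k is calibrated).
T_amb, q, L, k_true, sigma = 20.0, 50.0, 1.0, 2.5, 0.5
x = np.linspace(0.1, 0.9, 9)
data = T_amb + q * (L - x) / k_true + rng.normal(0, sigma, x.size)

def log_post(k):
    if k <= 0:
        return -np.inf                      # flat prior restricted to k > 0
    resid = data - (T_amb + q * (L - x) / k)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior over k
k, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20000):
    k_prop = k + rng.normal(0, 0.1)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)
post = np.array(chain[5000:])               # discard burn-in
print(post.mean())                          # close to k_true = 2.5
```

A verification step in the spirit of the dissertation would compare such posterior summaries against known synthetic truth, as done here with `k_true`.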

  11. Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters

    Directory of Open Access Journals (Sweden)

    Adeniyi Ganiyu Adeogun

    2015-10-01

    Full Text Available This paper presents the outcome of our investigation of the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed through coupling of an existing 1D sewer network model (SWMM) and a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing a sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as the mesh resolution, the Digital Elevation Model (DEM) resolution and the roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning's friction value, the DEM resolution and the area of the triangular mesh. Also, changes in the aforementioned modeling parameters influence the flood characteristics, such as the inundation extent, the flow depth and the velocity across the model domain.

  12. Modeling the outflow of liquid with initial supercritical parameters using the relaxation model for condensation

    Directory of Open Access Journals (Sweden)

    Lezhnin Sergey

    2017-01-01

    Full Text Available The two-temperature model of the outflow from a vessel with initial supercritical parameters of the medium has been realized. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.
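
A relaxation description of a phase transition can be sketched as a single ODE, dχ/dt = (χ_eq − χ)/θ, integrated explicitly; θ, χ_eq and the step size below are illustrative stand-ins, not the paper's asymptotic relaxation-time model.

```python
# Relaxation closure for the vapor mass fraction chi:
#   d(chi)/dt = (chi_eq - chi) / theta
# theta is the relaxation time; chi_eq the local equilibrium quality.
# Values below are illustrative, not taken from the paper.
theta, chi_eq, dt = 1e-3, 0.35, 1e-5
chi = 0.0
for n in range(1000):
    chi += dt * (chi_eq - chi) / theta      # explicit Euler step

# After ~10 relaxation times chi has essentially reached chi_eq
print(abs(chi - chi_eq) < 1e-3)             # True
```

The finite θ is what makes the model non-equilibrium: with θ → 0 the mixture would jump instantly to the equilibrium quality.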

  13. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    Science.gov (United States)

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

    Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms, they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
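
The ABC-SMC skeleton that the DPM kernels plug into can be sketched on a toy one-parameter model. The code below uses a plain Gaussian transition kernel and keeps the particle weights uniform for brevity (the full algorithm uses importance weights, and the paper's contribution is to replace the Gaussian kernel with a Dirichlet process mixture fitted to the previous population); the toy model, prior and tolerance schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: data ~ N(theta, 1), uniform prior on [-5, 5].
theta_true = 2.0
obs = rng.normal(theta_true, 1.0, 100)

def distance(theta):
    sim = rng.normal(theta, 1.0, 100)       # simulate, no likelihood needed
    return abs(sim.mean() - obs.mean())

N = 200                                     # particles per population
eps_schedule = [1.0, 0.5, 0.2, 0.1]         # shrinking tolerances
particles = rng.uniform(-5, 5, N)           # sample from the prior
weights = np.full(N, 1.0 / N)

for eps in eps_schedule:
    new_particles = np.empty(N)
    sigma = 2.0 * particles.std()           # kernel scale from last population
    for i in range(N):
        while True:
            theta = rng.choice(particles, p=weights)
            theta += rng.normal(0, sigma)   # Gaussian transition kernel; the
            if not -5 <= theta <= 5:        # paper fits a DPM here instead
                continue
            if distance(theta) <= eps:
                new_particles[i] = theta
                break
    particles = new_particles
    weights = np.full(N, 1.0 / N)           # simplification: uniform weights

print(particles.mean())                     # near theta_true = 2.0
```

A kernel adapted to the shape of the previous population (the role of the DPM) raises the acceptance rate in the inner loop, which is exactly where this skeleton spends its time.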

  14. Experimental validation of analytical models for a rapid determination of cycle parameters in thermoplastic injection molding

    Science.gov (United States)

    Pignon, Baptiste; Sobotka, Vincent; Boyard, Nicolas; Delaunay, Didier

    2017-10-01

    Two different analytical models are presented to determine cycle parameters of the thermoplastic injection process. The aim of these models is to quickly provide a first set of data for mold temperature and cooling time. The first model is specific to amorphous polymers and the second one is dedicated to semi-crystalline polymers, taking the crystallization into account. In both cases, the contact between the polymer and the mold can be considered as perfect or not (i.e., a thermal contact resistance can be included). Results from the models are compared with experimental data obtained with an instrumented mold for an acrylonitrile butadiene styrene (ABS) and a polypropylene (PP). Good agreement was obtained for the mold temperature variation and for the heat flux. In the case of the PP, the analytical crystallization times were compared with those given by a coupled model of heat transfer and crystallization kinetics.
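
The paper's own models are not reproduced here, but the classic textbook estimate of the cooling time of a plate-like part conveys the kind of quick first estimate targeted: t_c = s²/(π²α) · ln[4(T_melt − T_mold)/(π(T_eject − T_mold))]. The material values below are illustrative order-of-magnitude numbers for ABS, not the paper's data.

```python
import math

def cooling_time(s, alpha, T_melt, T_mold, T_eject):
    """Classic plate cooling-time estimate (textbook formula, not the
    paper's model):
      t = s^2/(pi^2*alpha) * ln(4*(T_melt - T_mold)/(pi*(T_eject - T_mold)))
    s: wall thickness [m], alpha: thermal diffusivity [m^2/s], T in degC."""
    return (s ** 2 / (math.pi ** 2 * alpha)) * math.log(
        4.0 * (T_melt - T_mold) / (math.pi * (T_eject - T_mold)))

# Illustrative ABS-like values (assumed, order of magnitude only)
t = cooling_time(s=2e-3, alpha=8e-8, T_melt=240.0, T_mold=60.0, T_eject=100.0)
print(round(t, 1))   # ≈ 8.8 s for a 2 mm wall
```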

  15. Theoretical justification of space-mapping-based modeling utilizing a database and on-demand parameter extraction

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    We present a theoretical justification of a recently introduced surrogate modeling methodology based on space mapping that relies on an available database and on-demand parameter extraction. Fine model data, the so-called base set, is assumed available in the region of interest. To evaluate the surrogate, we perform parameter extraction with weighting coefficients dependent on the distance between the point of interest and base points. We provide theoretical results showing that the new methodology can assure any accuracy that is required (provided the base set is dense enough), which...

  16. Evolution of autocatalytic sets in a competitive percolation model

    International Nuclear Information System (INIS)

    Zhang, Renquan; Pei, Sen; Wei, Wei; Zheng, Zhiming

    2014-01-01

    The evolution of autocatalytic sets (ACSs) is a widespread process in biological, chemical, and ecological systems and is of great significance in many applications such as the evolution of new species or complex chemical organization. In this paper, we propose a competitive model with an m-selection rule in which an abrupt emergence of a macroscopic independent ACS is observed. By numerical simulations, we find that the maximal increase of the size grows linearly with the system size. We analytically derive the threshold t_α where the explosive transition occurs and verify it by simulations. Moreover, our analysis explains how this giant independent ACS grows and reveals that, as the selection rule becomes stricter, the phase transition is dramatically postponed, and the number of the largest independent ACSs coexisting in the system increases accordingly. Our result indicates that suppression during evolution could lead to the abrupt appearance of giant ACSs. (paper)
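
The effect of an m-selection rule can be illustrated with a generic competitive (Achlioptas-type) percolation sketch: among m candidate edges, the one minimizing the product of the merged component sizes is added. This is only an analogy for the suppression mechanism, not the paper's exact ACS dynamics; all sizes and step counts below are arbitrary.

```python
import random

random.seed(0)

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]       # path halving
        a = parent[a]
    return a

def largest_cluster(n, m, steps):
    """Competitive percolation on n nodes: among m candidate edges, add the
    one minimizing the product of the two component sizes (illustrative
    m-selection rule)."""
    parent = list(range(n))
    size = [1] * n
    for _ in range(steps):
        best = None
        for _ in range(m):
            a, b = random.randrange(n), random.randrange(n)
            ra, rb = find(parent, a), find(parent, b)
            score = size[ra] * size[rb]
            if best is None or score < best[0]:
                best = (score, ra, rb)
        _, ra, rb = best
        if ra != rb:                        # union by size
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
    return max(size[find(parent, v)] for v in range(n))

# Stricter selection (larger m) postpones the growth of the giant cluster
print(largest_cluster(2000, 1, 1500), largest_cluster(2000, 5, 1500))
```

At the same edge density, the m = 5 run yields a much smaller largest cluster than the unrestricted m = 1 run, mirroring the postponed, abrupt transition described in the abstract.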

  17. Mathematical Modelling with Fuzzy Sets of Sustainable Tourism Development

    Directory of Open Access Journals (Sweden)

    Nenad Stojanović

    2011-10-01

    Full Text Available In the first part of the study we introduce fuzzy sets that correspond to comparative indicators for measuring the sustainable development of tourism. In the second part of the study it is shown, on the basis of the model created, how one can determine the value of sustainable tourism development in protected areas based on the following established groups of indicators: to assess the economic status, to assess the impact of tourism on the social component, to assess the impact of tourism on cultural identity, to assess the environmental conditions and indicators, as well as to assess tourist satisfaction, all using fuzzy logic. It is also shown how to test the confidence in the rules by which, according to experts, appropriate decisions can be created in order to protect the biodiversity of protected areas.
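
A minimal version of the fuzzy machinery can be sketched with triangular membership functions and a weighted aggregation of the indicator groups named above; the group scores and weights below are invented for illustration, not the study's calibrated values.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical indicator-group scores on [0, 10]; weights are assumed.
scores  = {"economic": 6.5, "social": 7.2, "cultural": 5.8,
           "environment": 8.1, "satisfaction": 7.7}
weights = {"economic": 0.25, "social": 0.2, "cultural": 0.15,
           "environment": 0.25, "satisfaction": 0.15}

# Degree to which each group is "sustainable" (fuzzy set over the score axis)
mu_sustainable = {g: tri(s, 4.0, 8.0, 12.0) for g, s in scores.items()}

# Weighted aggregation into an overall sustainability degree in [0, 1]
overall = sum(weights[g] * mu_sustainable[g] for g in scores)
print(round(overall, 3))   # 0.766
```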

  18. Parameters Online Detection and Model Predictive Control during the Grain Drying Process

    Directory of Open Access Journals (Sweden)

    Lihui Zhang

    2013-01-01

    Full Text Available In order to improve the grain drying quality and automation level, and in line with the structural characteristics of the cross-flow circulation grain dryer designed and developed by us, temperature, moisture, and other parameter-measuring sensors were placed on the dryer to achieve online automatic detection of process parameters during the grain drying process. A drying model predictive control system was set up. A grain drying predictive control model at constant velocity and variable temperature was established, in which the entire process is dried at constant velocity (i.e., the precipitation rate per hour is constant) and variable temperature. Combining a PC with a PLC, and based on LabVIEW, a system control platform was designed.
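
The constant-velocity control objective (hold the precipitation rate per hour constant by adjusting the drying-air temperature) can be sketched with a toy proportional loop; the real system uses a model predictive controller on a PLC, and the linear plant gain below is an assumption.

```python
# Toy plant: moisture removal rate responds linearly and instantly to the
# drying-air temperature (gain assumed; a real dryer needs the predictive
# model described in the abstract, not this proportional loop).
GAIN = 0.01          # (% moisture / h) per degC
SETPOINT = 0.5       # target precipitation rate, % moisture per hour
KP = 20.0            # proportional gain, degC per (% / h) of error

T_air, rate = 30.0, 0.3
for _ in range(50):
    error = SETPOINT - rate
    T_air += KP * error          # adjust air temperature
    rate = GAIN * T_air          # toy plant response

print(round(rate, 4), round(T_air, 1))   # rate settles at the setpoint
```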

  19. On material modelling, identification of material parameters and application to two benchmark exercises

    International Nuclear Information System (INIS)

    Laemmer, H.; Diegele, E.

    2000-01-01

    The model of finite deformation thermoviscoplasticity presented in 1997, together with the identification of material parameters as given in 1998, was applied to two benchmark exercises within the REVISA (Reactor Vessel Integrity in Severe Accidents) project in 1999. Starting from a simplified version of the theory which only includes the kinematic hardening assumption, new sets of parameters were identified for 16MND5 reactor pressure vessel steel from simple tensile and creep tests. The model, implemented in the ABAQUS finite element code, was applied to two exercises. The first was a benchmark exercise which follows the loading conditions of RUPTURE experiment number 15 as performed at CEA. The numerical analysis was compared to the experimental data. The second example was a scenario of a small hot spot and external cooling by radiation. (orig.) [de

  20. Constructing Ebola transmission chains from West Africa and estimating model parameters using internet sources.

    Science.gov (United States)

    Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V

    2017-07-01

    During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.

  1. Dependence of tropical cyclone development on coriolis parameter: A theoretical model

    Science.gov (United States)

    Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda

    2018-03-01

    A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity, in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free-atmosphere diabatic heating and the boundary-layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.
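
The latitude dependence enters through the Coriolis parameter f = 2Ω sin φ, which is easy to tabulate (standard constants, not specific to this paper's model):

```python
import math

OMEGA = 7.292e-5            # Earth's rotation rate, rad/s

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (1, 5, 20, 45):
    print(lat, f"{coriolis(lat):.2e}")
# f at the preferred ~5 degrees is about 1.27e-5 s^-1
```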

  2. Application of evolutionary algorithms to optimize the model parameters of casting cooling process

    Directory of Open Access Journals (Sweden)

    S. Kluska-Nawarecka

    2010-10-01

    Full Text Available One of the most commonly used methods of numerical simulation is the finite element method (FEM). Its popularity is reflected in the number of tools supporting the preparation of simulation models. However, despite its usefulness, FEM is often very troublesome in use; the problem is the selection of the finite element mesh or shape function. In addition, FEM assumes a complete knowledge of the simulated process and of the parameters describing the investigated phenomena, including model geometry, boundary conditions, physical parameters, and the mathematical model describing these phenomena. A comparison of the data obtained from physical experiments and simulations indicates an inaccuracy, which may result from an incorrectly chosen shape of element or geometry of the grid. The application of computational intelligence methods, combined with knowledge of the manufacturing technology of metal products, should allow an efficient selection of parameters of the mathematical models and, as a consequence, more precise control of the process of the casting solidification and cooling to ensure the required quality. The designed system has been integrated with the existing simulation environment, which will significantly facilitate the preparation and implementation of calculations of this type. Moreover, the use of a distributed model will significantly reduce the time complexity of calculations, requiring multiple repetition of complex simulations to estimate the quality of the different sets of parameters.

  3. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    Science.gov (United States)

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
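
The generating equations of the dual change score model are simple enough to simulate directly: Δy_t = slope + β·y_{t−1}. The sketch below (all parameter values assumed) shows how a negative autoproportion β bends trajectories, which is the structure the constraints discussed above act on.

```python
import numpy as np

rng = np.random.default_rng(2)

def dual_change_score(y0, slope, beta, waves, noise=0.0):
    """Generate one trajectory of a dual change score model:
    delta_y[t] = slope + beta * y[t-1];  y[t] = y[t-1] + delta_y[t].
    (Illustrative generator; parameter values are assumed.)"""
    y = [y0]
    for _ in range(waves - 1):
        dy = slope + beta * y[-1] + rng.normal(0, noise)
        y.append(y[-1] + dy)
    return np.array(y)

# With beta = 0 the model reduces to constant (linear) change:
print(dual_change_score(10.0, 2.0, 0.0, 5))        # 10, 12, 14, 16, 18
# A negative autoproportion bends the curve toward an asymptote:
print(dual_change_score(10.0, 2.0, -0.3, 5).round(2))
```

Incorrectly constraining `slope` or `beta` to be time-invariant when the data-generating values drift is exactly the misspecification whose consequences the study quantifies.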

  4. Assessing the relative importance of parameter and forcing uncertainty and their interactions in conceptual hydrological model simulations

    Science.gov (United States)

    Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.

    2016-11-01

    Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
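
The variance decomposition used here is the standard two-way ANOVA identity: with one simulation per (forcing member, parameter set) cell, the total sum of squares splits exactly into forcing, parameter and interaction terms. A sketch on synthetic data (all effect sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated "streamflow" for I forcing ensemble members x J parameter sets
I, J = 8, 6
forcing_effect = rng.normal(0, 1.0, I)[:, None]
param_effect = rng.normal(0, 0.5, J)[None, :]
interaction = rng.normal(0, 0.2, (I, J))
y = 5.0 + forcing_effect + param_effect + interaction

grand = y.mean()
row = y.mean(axis=1, keepdims=True)          # forcing means
col = y.mean(axis=0, keepdims=True)          # parameter-set means

ss_forcing = J * ((row - grand) ** 2).sum()
ss_param = I * ((col - grand) ** 2).sum()
ss_inter = ((y - row - col + grand) ** 2).sum()
ss_total = ((y - grand) ** 2).sum()

# Exact two-way decomposition: SS_total = SS_forcing + SS_param + SS_inter
print(np.isclose(ss_total, ss_forcing + ss_param + ss_inter))   # True
print(ss_forcing / ss_total, ss_param / ss_total)   # fractional contributions
```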

  5. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that variational methods hold significant potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
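
The rank-deficiency argument can be illustrated directly: build a finite-difference Jacobian of a toy response that depends on only a few of its parameters and inspect its singular values (the response function below is invented, not the flash flood model):

```python
import numpy as np

def jacobian(f, p, h=1e-6):
    """Forward-difference Jacobian of f at parameter vector p."""
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += h
        J[:, j] = (f(dp) - f0) / h
    return J

# Toy "hydrological response": depends on only 2 of 5 parameters
def response(p):
    return np.array([p[0] + p[1], p[0] - 2.0 * p[1], 3.0 * p[0]])

p0 = np.ones(5)
J = jacobian(response, p0)
s = np.linalg.svd(J, compute_uv=False)
print(s.round(6))        # two dominant singular values, the rest ~ 0
```

The leading singular vectors span the directions the data actually constrain; parameterizing along them is the SVD-based strategy the abstract recommends combining with regularization.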

  6. Physical basis and potential estimation techniques for soil erosion parameters in the Precipitation-Runoff Modeling System (PRMS)

    Science.gov (United States)

    Carey, W.P.; Simon, Andrew

    1984-01-01

    Simulation of upland-soil erosion by the Precipitation-Runoff Modeling System currently requires the user to estimate two rainfall detachment parameters and three hydraulic detachment parameters. One rainfall detachment parameter can be estimated from rainfall simulator tests. A reformulation of the rainfall detachment equation allows the second parameter to be computed directly. The three hydraulic detachment parameters consist of one exponent and two coefficients. The initial value of the exponent is generally set equal to 1.5. The two coefficients are functions of the soil's resistance to erosion, and one of the two also accounts for sediment delivery processes not simulated in the model. Initial estimates of these parameters can be derived from other modeling studies or from published empirical relations. (USGS)
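
The idea of computing a detachment coefficient directly can be sketched with a generic splash-detachment relation D = k_r·I^b (a common form; the exact PRMS equations are not reproduced here). With the exponent fixed, a single rainfall-simulator observation determines the coefficient:

```python
# Generic splash-detachment relation D = k_r * I**b. With the exponent b
# held fixed, the coefficient follows directly from one observed
# detachment rate instead of being calibrated.
def detachment_coefficient(D_obs, intensity, b=2.0):
    """Solve D = k_r * I**b for k_r given an observed detachment rate."""
    return D_obs / intensity ** b

# Hypothetical rainfall-simulator test (units and values assumed)
k_r = detachment_coefficient(D_obs=0.9, intensity=30.0, b=2.0)
print(k_r)                                  # 0.001
assert abs(k_r * 30.0 ** 2 - 0.9) < 1e-12   # round trip
```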

  7. A cooperative strategy for parameter estimation in large scale systems biology models.

    Science.gov (United States)

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows performance to be sped up. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and
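
The cooperation idea can be sketched sequentially: several crude local searches play the role of threads, and after each epoch all of them restart near the best solution found so far. This is a toy stand-in (sphere cost, random-walk moves), not the enhanced Scatter Search itself:

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))            # toy cost function; minimum is 0

# Sequential stand-in for CeSS-style cooperation: each "thread" runs a crude
# stochastic local search; after every epoch all threads restart from small
# perturbations of the best solution (information sharing). The real CeSS
# runs enhanced Scatter Search instances in parallel processes.
n_threads, dim = 4, 3
points = [rng.uniform(-5, 5, dim) for _ in range(n_threads)]
step = 0.5

for epoch in range(30):
    for i in range(n_threads):
        for _ in range(20):                 # local moves within one epoch
            trial = points[i] + rng.normal(0, step, dim)
            if sphere(trial) < sphere(points[i]):
                points[i] = trial
    best = min(points, key=sphere)          # cooperation step: share the best
    points = [best + rng.normal(0, 0.05, dim) for _ in range(n_threads)]
    step *= 0.9

print(sphere(min(points, key=sphere)) < 0.5)   # converged near the optimum
```

The sharing step is what "modifies the systemic properties" of the ensemble: no single thread needs to succeed alone, since all of them inherit the best point found so far.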

  8. A cooperative strategy for parameter estimation in large scale systems biology models

    Directory of Open Access Journals (Sweden)

    Villaverde Alejandro F

    2012-06-01

    Full Text Available Abstract Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows performance to be sped up. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here

  9. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
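
Two of the fitting techniques surveyed, maximum likelihood and the method of moments, can be compared on a lognormal example, a distribution commonly assigned to positive performance-assessment inputs (synthetic data; the true parameters below are assumed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic positive-valued input data drawn from a lognormal
mu_true, sigma_true = 1.0, 0.4
data = rng.lognormal(mu_true, sigma_true, 5000)

# Maximum likelihood: for a lognormal, fit a normal to the log-data
mu_mle = np.log(data).mean()
sigma_mle = np.log(data).std()

# Method of moments: match the sample mean and variance
m, v = data.mean(), data.var()
sigma2_mom = np.log(1.0 + v / m ** 2)
mu_mom = np.log(m) - 0.5 * sigma2_mom

print(round(mu_mle, 2), round(sigma_mle, 2))        # ~ (1.0, 0.4)
print(round(mu_mom, 2), round(np.sqrt(sigma2_mom), 2))
```

Both estimators recover the generating parameters here; with skewed or scarce data the two can disagree noticeably, which is one reason the report surveys several techniques plus goodness-of-fit checks.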

  10. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, the method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
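
The Bayesian updating step described in these two records can be illustrated with a simple grid-based calculation. This is a generic sketch with assumed example data (a binomial likelihood for a failure probability), not the report's own numerical approach:

```python
import numpy as np

# Discrete grid over the parameter of interest (here: a failure probability p)
p_grid = np.linspace(0.01, 0.99, 99)

# Prior: uniform over the grid (the maximum-entropy choice when only
# the parameter's bounds are known)
prior = np.ones_like(p_grid) / p_grid.size

# New information (assumed example data): 3 failures observed in 20 trials,
# giving a binomial likelihood for each candidate value of p
k, n = 3, 20
likelihood = p_grid**k * (1.0 - p_grid)**(n - k)

# Bayes' theorem on the grid: posterior ~ prior * likelihood, renormalized
posterior = prior * likelihood
posterior /= posterior.sum()

# The posterior mean moves from the prior mean (0.5) toward the data (k/n = 0.15)
post_mean = float(np.sum(p_grid * posterior))
```

With the uniform prior this reproduces the conjugate Beta-binomial result; the same grid update works unchanged for any tabulated prior shape.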

  11. Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)

    Science.gov (United States)

    Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.

    2015-12-01

    Crop phenology models are important components of crop growth models. In the case of phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic phenology prediction error. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures on model performance and systematic error. We used two optimization approaches: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) to simultaneously optimize all model parameters. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small (systematic error Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either RMSE or systematic error. Therefore, it is important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
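
The role of cardinal temperatures in such phenology sub-models can be sketched with a generic triangular thermal-time response. The functional form and the base/optimum/maximum values below are common illustrative choices, not the calibrated values from this study:

```python
def thermal_time(t, t_base=8.0, t_opt=30.0, t_max=42.0):
    """Daily thermal-time increment (degree-days) from a triangular response
    defined by cardinal temperatures (hypothetical default values)."""
    if t <= t_base or t >= t_max:
        return 0.0
    if t <= t_opt:
        # Linear rise between base and optimum temperature
        return t - t_base
    # Linear decline between optimum and maximum temperature
    return (t_max - t) * (t_opt - t_base) / (t_max - t_opt)

def days_to_stage(temps, threshold):
    """Accumulate daily thermal time until a stage threshold (degree-days)
    is reached; returns the day number, or None if never reached."""
    total = 0.0
    for day, t in enumerate(temps, start=1):
        total += thermal_time(t)
        if total >= threshold:
            return day
    return None
```

Because accumulated thermal time is highly sensitive to the cardinal temperatures, leaving them at defaults while calibrating only the stage thresholds is what produces the temperature-dependent systematic error the abstract describes.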

  12. Uncertainty Analysis Using Optimization and Direct Parameter Sampling With Correlation for a Physically Based Contaminant Transport Model

    Science.gov (United States)

    Sykes, J. F.; Yin, Y.

    2008-12-01

    Due to the ill-posed nature of contaminant transport models, inverse modeling and traditional gradient-based optimization approaches often encounter difficulties when applied to real case studies. The correlation of the transport parameters must be included in uncertainty analyses. In this study, a physically based transient groundwater flow model was developed to establish the historical relationship between a contaminant site and the down-gradient municipal well field. The parameters for the three-dimensional transient groundwater flow model were calibrated using both punctual data over a thirty-year period and approximately nine years of head data from continuous well records. Spatially and temporally varying recharge was incorporated in the model to account for water level fluctuations in observation wells. Given the spatially and temporally varying velocities, the six contaminant transport parameters of dispersivities, retardation, initial source concentration and source decay coefficient were estimated using a multi-start PEST algorithm that combined the traditional gradient search approach with a heuristic technique. The multi-start feature partially mitigated the problem of convergence to local optima. The study also compared a Dynamically Dimensioned Search (DDS) algorithm to the multi-start PEST algorithm. A modified Latin Hypercube (LHC) sampling approach accounting for correlation between parameters was employed to conduct an uncertainty analysis for contaminant concentration breakthrough at pumping wells. The LHC sampling can be operated using the multivariate normal distribution for each parameter, in which correlations among parameters are specified through optimization and form part of the corresponding probability space. Because of the non-uniqueness issue for ill-posed problems, multiple feasible transport parameter sets and covariance matrices were generated using the multi-start PEST algorithm. The likelihood for each parameter set was estimated
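
The correlated Latin hypercube step can be sketched as follows. This is a simplified Gaussian-copula-style construction with an assumed 2-parameter correlation matrix; the Cholesky transform mixes coordinates, so marginal stratification is only approximate (production LHC schemes such as Iman-Conover preserve it more carefully):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def correlated_lhs(n_samples, corr, rng):
    """Latin hypercube sample of standard normals with an (approximate)
    target correlation imposed via a Cholesky transform."""
    d = corr.shape[0]
    # Stratified uniforms: one draw per equal-probability bin, bins shuffled
    # independently per dimension
    bins = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (bins + rng.random((n_samples, d))) / n_samples
    z = stats.norm.ppf(u)            # independent LHS standard normals
    L = np.linalg.cholesky(corr)     # impose the target correlation
    return z @ L.T

# Assumed correlation between two transport parameters (e.g. from a
# PEST-derived covariance matrix, rescaled to a correlation matrix)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
z = correlated_lhs(2000, corr, rng)
sample_corr = np.corrcoef(z, rowvar=False)[0, 1]
```

Each column of `z` can then be mapped through the inverse CDF of the corresponding marginal distribution to obtain physically meaningful parameter values.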

  13. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  14. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    Full Text Available The paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parametric space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performances of four selected PSO methods were tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the new APartW PSO variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for finding the solution of inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, made on a selected set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of Bilan model parameters and thus can be used for solving related inverse problems during the calibration process of the studied water balance hydrological model.
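
The time-varying inertia-weight idea behind variants such as LinTimeVarW can be sketched with a minimal PSO. The constants and the test function below are generic textbook choices, not the settings used for the Bilan model, and the shuffling/complex mechanism is omitted:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, seed=0):
    """Minimal PSO minimizer with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for it in range(n_iter):
        # Inertia weight decays linearly from w_start to w_end over the run:
        # large early (exploration), small late (exploitation)
        w = w_start - (w_start - w_end) * it / (n_iter - 1)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Usage: minimize the 3-d sphere function, a standard benchmark
best_x, best_val = pso(lambda p: float(np.sum(p * p)),
                       (np.full(3, -5.0), np.full(3, 5.0)))
```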

  15. Influence of the noise model on level set active contour segmentation.

    Science.gov (United States)

    Martin, Pascal; Réfrégier, Philippe; Goudail, François; Guérault, Frédéric

    2004-06-01

    We analyze a level set implementation of region snakes based on the maximum likelihood method for different noise models that belong to the exponential family. We show that this approach can improve segmentation results in noisy images, and we demonstrate that the regularization term can be efficiently determined using an information-theoretic approach, i.e., the minimum description length principle. The criterion to be optimized has no free parameter to be tuned by the user, and the obtained segmentation technique is adapted to non-simply connected objects.

  16. Modeling and Parameter Estimation of Spacecraft Fuel Slosh with Diaphragms Using Pendulum Analogs

    Science.gov (United States)

    Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Ristow, James; Suderman, James; Walker, Charles; Hubert, Carl

    2007-01-01

    Prediction and control of liquid slosh in moving containers is an important consideration in the design of spacecraft and launch vehicle control systems. Even with modern computing systems, CFD-type simulations are not fast enough to allow for large-scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. It is therefore still desirable to use some type of simplified mechanical analog for the slosh to shorten computation time. Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. This paper describes efforts by the university component of a team comprising NASA's Launch Services Program, Embry-Riddle Aeronautical University, Southwest Research Institute and Hubert Astronautics to improve the accuracy and efficiency of modeling techniques used to predict these types of motions. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. The previous research was an effort to automate the process of slosh model parameter identification using a MATLAB/SimMechanics-based computer simulation. These results are the first step in applying the same computer estimation to a full-size tank and vehicle propulsion system. The introduction of diaphragms to this experimental set-up will aid in a better and more complete prediction of fuel slosh characteristics and behavior. Automating the

  17. Implications of the subjectivity in hydrologic model choice and parameter identification on the portrayal of climate change impact

    Science.gov (United States)

    Mendoza, Pablo; Clark, Martyn; Rajagopalan, Balaji; Mizukami, Naoki; Gutmann, Ethan; Newman, Andy; Barlage, Michael; Brekke, Levi; Arnold, Jeffrey

    2014-05-01

    Climate change studies involve several methodological choices that affect the hydrological sensitivities obtained, including emission scenarios, climate models, downscaling techniques and hydrologic modeling approaches. Among these, hydrologic model structure selection (i.e. the set of equations that describe catchment processes) and parameter identification are particularly relevant and usually have a strong subjective component. This subjectivity is not limited to engineering applications, but also extends to many of our research studies, resulting in problems such as missing processes in our models, inappropriate parameterizations and compensatory effects of model parameters (i.e. getting the right answers for the wrong reasons). The goal of this research is to assess the impact of our modeling decisions on projected changes in water balance and catchment behavior for future climate scenarios. Additionally, we aim to better understand the relative importance of hydrologic model structures and parameters in the portrayal of climate change impact. Therefore, we compare hydrologic sensitivities coming from four different model structures (PRMS, VIC, Noah and Noah-MP) with those coming from parameter sets identified using different decisions related to model calibration (objective function, multiple local optima and calibration forcing dataset). We found that both model structure selection and parameter estimation strategy (objective function and forcing dataset) affect the direction and magnitude of the climate change signal. Furthermore, the relative effect of subjective decisions on projected variations of catchment behavior depends on the hydrologic signature measure analyzed. Finally, parameter sets with similar values of the objective function may yield similar current and future changes in water balance, but may lead to very different sensitivities in hydrologic behavior.

  18. Eliciting hyperparameters of prior distributions for the parameters of paired comparison models

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2013-02-01

    Full Text Available In the study of paired comparisons (PC), items may be ranked or issues may be prioritized through the subjective assessment of certain judges. PC models are developed and then used to serve the purpose of ranking. The PC models may be studied through a classical or Bayesian approach. Bayesian inference is a modern statistical technique used to draw conclusions about population parameters. Its beauty lies in incorporating prior information about the parameters into the analysis, in addition to current information (i.e. data). The prior and current information are formally combined to yield a posterior distribution for the population parameters, which is the workbench of Bayesian statisticians. However, the problems Bayesians face concern the selection and formal utilization of the prior distribution. Once the type of prior distribution has been decided, the problem of estimating the parameters of the prior distribution (i.e. elicitation) still persists. Different methods have been devised to serve this purpose. In this study an attempt is made to use Minimum Chi-square (henceforth MCS) for elicitation. Though it is a classical estimation technique, it is applied here to the elicitation problem. The entire elicitation procedure is illustrated through a numerical data set.
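
The idea of minimizing a chi-square-style discrepancy to elicit hyperparameters can be illustrated outside the paired-comparison setting. The example below is hypothetical, not the paper's PC-model setup: Beta(a, b) prior hyperparameters are fitted so that the Beta quantiles match expert-assessed quantiles:

```python
import numpy as np
from scipy import stats, optimize

# Expert's assessments (assumed example values): the 25th/50th/75th
# percentiles of a probability-type parameter
probs = np.array([0.25, 0.50, 0.75])
elicited_q = np.array([0.20, 0.30, 0.42])

def chi_square_discrepancy(params):
    """Chi-square-style distance between Beta(a, b) quantiles and the
    expert's elicited quantiles."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    model_q = stats.beta.ppf(probs, a, b)
    return float(np.sum((model_q - elicited_q) ** 2 / model_q))

res = optimize.minimize(chi_square_discrepancy, x0=[2.0, 4.0],
                        method="Nelder-Mead")
a_hat, b_hat = res.x
```

The fitted Beta(a_hat, b_hat) can then serve as the prior distribution in the subsequent Bayesian analysis.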

  19. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  20. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    Science.gov (United States)

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
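
Tukey's one-df interaction term can be estimated by regressing the additive-model residuals on the product of the fitted main effects. Below is a minimal sketch on a synthetic balanced two-way table with a known interaction, not the GEI application of this paper:

```python
import numpy as np

def tukey_one_df(table):
    """Estimate Tukey's one-degree-of-freedom non-additivity term for a
    balanced two-way table: y_ij ~ mu + a_i + b_j + lam * a_i * b_j."""
    mu = table.mean()
    a = table.mean(axis=1) - mu            # row main effects
    b = table.mean(axis=0) - mu            # column main effects
    resid = table - mu - a[:, None] - b[None, :]
    prod = np.outer(a, b)                  # the single one-df regressor
    lam = float((resid * prod).sum() / (prod ** 2).sum())
    return mu, a, b, lam

# Synthetic table built with a known multiplicative interaction (lam = 0.5)
a_true = np.array([-1.0, 0.0, 1.0])
b_true = np.array([-2.0, 0.0, 2.0])
table = 10.0 + a_true[:, None] + b_true[None, :] + 0.5 * np.outer(a_true, b_true)
mu, a, b, lam = tukey_one_df(table)
```

Because the interaction is summarized by the single scalar `lam` instead of one parameter per cell, the estimate is far more parsimonious than the saturated model, which is exactly the efficiency/bias trade-off the shrinkage estimator in this paper balances.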

  1. The LXCat project: Electron scattering cross sections and swarm parameters for low temperature plasma modeling

    International Nuclear Information System (INIS)

    Pancheshnyi, S.; Biagi, S.; Bordage, M.C.; Hagelaar, G.J.M.; Morgan, W.L.; Phelps, A.V.; Pitchford, L.C.

    2012-01-01

    Graphical abstract: LXCat is an open-access website containing data needed for low temperature plasma modeling as well as on-line tools useful for their manipulation. Highlights: ► LXCat: an open-access website with data for low temperature plasma modeling. ► Contains compilations of electron scattering cross sections and transport data. ► Data from different contributors for many neutral, ground-state species. ► On-line tools for browsing, plotting, up/downloading data. ► On-line Boltzmann solver for calculating electron swarm parameters. - Abstract: LXCat is a dynamic, open-access, website for collecting, displaying, and downloading ELECtron SCATtering cross sections and swarm parameters (mobility, diffusion coefficient, reaction rates, etc.) required for modeling low temperature, non-equilibrium plasmas. Contributors set up individual databases, and the available databases, indicated by the contributor's chosen title, include mainly complete sets of electron-neutral scattering cross sections, although the option for introducing partial sets of cross sections exists. A database for measured swarm parameters is also part of LXCat, and this is a growing activity. On-line tools include options for browsing, plotting, and downloading cross section data. The electron energy distribution functions (edfs) in low temperature plasmas are in general non-Maxwellian, and LXCat provides an option for execution of an on-line Boltzmann equation solver to calculate the edf in homogeneous electric fields. Thus, the user can obtain electron transport and rate coefficients (averages over the edfs) in pure gases or gas mixtures over a range of values of the reduced electric field strength, E/N, the ratio of the electric field strength to the neutral density, using cross sections from the available databases. New contributors are welcome and anyone wishing to create a database and upload data can request a username and password. LXCat is part of a larger, community

  2. Key parameters of the sediment surface morphodynamics in an estuary - An assessment of model solutions

    Science.gov (United States)

    Sampath, D. M. R.; Boski, T.

    2018-05-01

    Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the power of current velocity in the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats between mean sea level and the low-tide level, while accretion occurred mainly in the subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the intertidal and subtidal compartments of the estuarine system evolve differently according to the dominant process (erosion or accretion). As the Guadiana estuarine system shows dominant erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy can be acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost

  3. A Novel Clinical Test for Setting Intermittent Pneumatic Compression Parameters Based on Edema Fluid Hydromechanics in the Lymphedematous Calf.

    Science.gov (United States)

    Zaleska, Marzanna; Olszewski, Waldemar L; Durlik, Marek; Kaczmarek, Mariusz

    2015-09-01

    Long-term observations confirm lasting effects and lack of complications of intermittent pneumatic compression (IPC) therapy. So far, no test has been designed that would provide data necessary for setting pressure and time parameters of the IPC device to obtain optimum decrease in limb volume. To design a test providing data on decrease of circumference under the inflated chamber in time depending on the applied compression pressure. One chamber was placed above the ankle joint and inflated to 120 mmHg in order to occlude tissue fluid backflow during inflation of the proximally located test chamber. The latter was inflated sequentially to 50, 80, 100, and 120 mmHg, for 1-3 minutes each. Calf circumference changes were recorded continuously using the plethysmographic strain gauges placed under and proximally to the inflated chamber. Four different types of the recorded circumference change curves were observed during inflation of the test chamber. The first was decrease under and increase proximally to the inflated chamber, another showed decrease under the inflated chamber and little change proximally, the third small decrease under the chamber but increase proximally, and the fourth no change under and proximally. Depending on the steepness of the obtained curves, pressures and timing of IPC device were increased to values bringing about edema fluid mobilization. The two-chamber inflation-deflation test provides plethysmographic data on the circumference changes during calf IPC, time necessary to obtain optimum decrease of circumference, and an insight into tissue elasticity. These data are useful for setting the compression devices at levels bringing about a decrease in limb swelling as well as may be of prognostic value with respect to the efficacy of long-term use of IPC.

  4. Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy

    Science.gov (United States)

    Kniffin, Gabriel Paul

    Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.

  5. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    Science.gov (United States)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with measured data in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid's response to changes in ground conductivity and resistances shows similar results to a lesser extent. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.

  6. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

    Full Text Available This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1–75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  7. Evaluation of physiological parameters and their influence on doses calculated from two alternative dosimetric models for the gastrointestinal tract

    International Nuclear Information System (INIS)

    Lessard, E.T.; Skrable, K.W.

    1981-01-01

    Two dosimetric models, the catenary compartmental model and the slug flow model are examined using three sets of physiological parameters. The impact of physiological parameters on the dosimetry of the tract is illustrated by comparing calculated maximum permissible daily activity ingestion rates for single, unabsorbed, particle emitting radionuclides with an effective energy term of unity. The conclusions drawn from this intercomparison of six different cases are: (1) Current dosimetric models which use physiological parameters described in this article do not significantly disagree, and (2) For the determination of average dose equivalent rates to segments of the tract due to chronic, long term ingestion of any radionuclide, the catenary compartmental model is a mathematically simpler approach. The catenary model in addition has certain advantages for the calculation of the photon dose contribution to one segment from cumulated activity (disintegrations) in another segment

  8. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, and then to derive a least squares parameter identification algorithm. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
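
Eliminating the state in this way reduces a canonical SISO model to a difference equation in past inputs and outputs, which ordinary least squares can fit directly. A minimal noise-free sketch with an assumed second-order example system (not the paper's own example):

```python
import numpy as np

# Known second-order system used to generate the data; this is the
# input-output form obtained after eliminating the canonical state:
#   y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
a1, a2, b1, b2 = 0.6, -0.2, 1.0, 0.5

rng = np.random.default_rng(1)
u = rng.standard_normal(500)          # persistently exciting input
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# Least squares: stack the regressors [y(k-1), y(k-2), u(k-1), u(k-2)]
# against the target y(k) and solve for the parameter vector
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

With noise-free data the estimates match the true parameters; the system states can then be reconstructed from `theta` and the input-output record, which is the second half of the estimator the abstract describes.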

  9. Parameter and state estimator for state space models.

    Science.gov (United States)

    Ding, Ruifeng; Zhuang, Linfan

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation; this eliminates the state variables and yields an equation containing only the system inputs and outputs, from which a least-squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example shows that the proposed algorithm is effective.
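
    Once the states are eliminated, the output becomes a linear regression on past inputs and outputs, which is exactly what least squares can solve. The sketch below shows the simplest noise-free version of that idea on a hypothetical second-order system; the coefficients and the ARX form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2nd-order system after the states have been eliminated
# (ARX form): y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
theta_true = np.array([1.2, -0.35, 0.5, 0.3])

N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = theta_true @ np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])

# Least-squares estimate: each regressor row collects past outputs/inputs.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# The states of an observable canonical realisation could now be
# reconstructed from theta_hat and the input-output data, as the paper
# describes for its second step.
print(theta_hat)
```

    In the noise-free case the least-squares estimate recovers the true coefficients essentially exactly; the paper's martingale analysis addresses the noisy case.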

  10. A novel approach to parameter uncertainty analysis of hydrological models using neural networks

    Directory of Open Access Journals (Sweden)

    D. P. Solomatine

    2009-07-01

    Full Text Available In this study, a methodology has been developed to emulate a time-consuming Monte Carlo (MC) simulation by using an Artificial Neural Network (ANN) for the assessment of model parametric uncertainty. First, an MC simulation of a given process model is run. Then an ANN is trained to approximate the functional relationships between the input variables of the process model and the synthetic uncertainty descriptors estimated from the MC realizations. The trained ANN model encapsulates the underlying characteristics of the parameter uncertainty and can be used to predict uncertainty descriptors for new data vectors. This approach was validated by comparing the uncertainty descriptors in the verification data set with those obtained by the MC simulation. The method is applied to estimate the parameter uncertainty of a lumped conceptual hydrological model, HBV, for the Brue catchment in the United Kingdom. The results are quite promising, as the prediction intervals estimated by the ANN are reasonably accurate. The proposed techniques could be useful in real-time applications when it is not practicable to run a large number of simulations for complex hydrological models and when the forecast lead time is very short.
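
    The MC-then-emulate workflow can be sketched in a few lines: run the Monte Carlo step once offline, then train a network to map inputs directly to an uncertainty descriptor. The toy process model, the choice of the standard deviation as descriptor, and the minimal hand-rolled one-hidden-layer network below are all illustrative assumptions standing in for the HBV model and the ANN toolkit used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "process model" with uncertain parameters theta = (theta1, theta2):
#   y = theta1 * x + theta2 * x**2   (purely illustrative)
def process_model(x, theta):
    return theta[0] * x + theta[1] * x ** 2

# 1) Monte Carlo over the parameter uncertainty: for each input x, the
#    spread of the MC realisations gives an uncertainty descriptor (std).
xs = np.linspace(0.0, 2.0, 40)
thetas = rng.normal([1.0, 0.5], [0.1, 0.05], size=(2000, 2))
mc = np.array([process_model(x, thetas.T) for x in xs])      # (40, 2000)
target_std = mc.std(axis=1)

# 2) Train a small one-hidden-layer network x -> std that emulates the
#    expensive MC step (full-batch gradient descent, squared-error loss).
X = xs[:, None]
W1 = rng.normal(0.0, 1.0, (8, 1)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, 8);      b2 = 0.0
lr = 0.1
for _ in range(10000):
    h = np.tanh(X @ W1.T + b1)                # (40, 8) hidden activations
    pred = h @ W2 + b2
    err = pred - target_std
    gW2 = h.T @ err / len(xs); gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = gh.T @ X / len(xs);  gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# The trained net now predicts the uncertainty descriptor directly,
# without re-running the Monte Carlo simulation.
emulated = np.tanh(X @ W1.T + b1) @ W2 + b2
print(float(np.max(np.abs(emulated - target_std))))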

  11. PARAMETER ESTIMATION IN NON-HOMOGENEOUS BOOLEAN MODELS: AN APPLICATION TO PLANT DEFENSE RESPONSE

    Directory of Open Access Journals (Sweden)

    Maria Angeles Gallego

    2014-11-01

    Full Text Available Many medical and biological problems require extracting information from microscopy images. Boolean models have been used extensively to analyze binary images of random clumps in many scientific fields. In this paper, a particular type of Boolean model with an underlying non-stationary point process is considered. The intensity of the underlying point process is formulated as a fixed function of the distance to a region of interest. A method to estimate the parameters of this Boolean model is introduced, and its performance is checked in two different settings. First, a comparative study with other existing methods is carried out using simulated data. Second, the method is applied to analyze the longleaf data set, a well-known point-pattern data set included in the R package spatstat. The results show that the new method provides estimates as accurate as those obtained with more complex methods developed for the general case. Finally, to illustrate the application of this model and this method, a particular type of phytopathological image is analyzed. These images show callose depositions in leaves of Arabidopsis plants. The analysis of callose depositions is widely used in the phytopathological literature to quantify the activity of plant immunity.
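
    The core ingredient of such a model is a germ process whose intensity is a function of the distance to a region of interest. The sketch below simulates only that germ process (the grains of the Boolean model and the paper's actual estimator are omitted), with an assumed exponential distance-decay intensity, and recovers the parameters by a crude grid-search maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

# Germ intensity decays with the distance x to the region of interest
# (here the left edge of the unit square): lambda(x, y) = lam0 * exp(-beta*x).
lam0_true, beta_true = 400.0, 2.0

# Simulate by thinning a homogeneous Poisson process of rate lam0_true.
n = rng.poisson(lam0_true)
cand = rng.random((n, 2))
keep = rng.random(n) < np.exp(-beta_true * cand[:, 0])
germs = cand[keep]

# Log-likelihood of an inhomogeneous Poisson point process:
#   sum_i log lambda(x_i, y_i)  -  integral of lambda over the window
def loglik(lam0, beta):
    integral = lam0 * (1.0 - np.exp(-beta)) / beta
    return len(germs) * np.log(lam0) - beta * germs[:, 0].sum() - integral

# Crude grid-search MLE; a real analysis would use spatstat's ppm().
lam0s = np.linspace(200.0, 600.0, 81)
betas = np.linspace(0.5, 4.0, 71)
ll = np.array([[loglik(l0, b) for b in betas] for l0 in lam0s])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(lam0s[i], betas[j])   # should land near 400 and 2.0
```

    With a few hundred simulated germs the grid-search MLE lands close to the true intensity parameters, which is the same qualitative behaviour the paper verifies on its simulated data.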

  12. Determiner use in Italian-Swedish and Italian-German children: Do Swedish and German represent the same parameter setting?

    Directory of Open Access Journals (Sweden)

    Tanja Kupisch

    2008-02-01

    Full Text Available In this article, we compare the acquisition of determiners by bilingual children acquiring Italian simultaneously with German or Swedish. We are concerned with cross-linguistic differences in the rate of acquisition, and we discuss in particular the Nominal Mapping Parameter, a model in which the syntax-semantics interface is crucial to acquisition and which predicts similar developmental patterns for children acquiring a Germanic language. We show that Swedish determiners are acquired more easily than German determiners, which implies that predictions about developmental patterns should not be based on syntactic factors alone but must make reference to typological differences in morphology and phonology. Furthermore, we show that the acquisition of Italian determiners is positively affected by the simultaneous acquisition of Swedish, but that no such effect arises when Italian is acquired simultaneously with German.

  13. Modeling the Mechanical Response of In Vivo Human Skin Under a Rich Set of Deformations

    KAUST Repository

    Flynn, Cormac

    2011-03-11

    Determining the mechanical properties of an individual's skin is important in the fields of pathology, biomedical device design, and plastic surgery. To address this need, we present a finite element model that simulates the skin of the anterior forearm and posterior upper arm under a rich set of three-dimensional deformations. We investigated the suitability of the Ogden and Tong-Fung strain energy functions along with a quasi-linear viscoelastic law. Using non-linear optimization techniques, we found material parameters and in vivo pre-stresses for different volunteers. The model simulated the experiments with errors of fit ranging from 13.7 to 21.5%. Pre-stresses ranging from 28 to 92 kPa were estimated. We show that using only in-plane experimental data in the parameter optimization results in a poor prediction of the out-of-plane response. The identifiability of the model parameters, evaluated using different determinability criteria, improves as the number of deformation orientations in the experiments increases. © 2011 Biomedical Engineering Society.
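
    The parameter-fitting step can be illustrated with the standard one-term incompressible Ogden model under uniaxial stretch. The material values are illustrative, and a simple grid search stands in for the non-linear optimiser and the full 3-D finite element model used in the paper.

```python
import numpy as np

# One-term incompressible Ogden model; uniaxial Cauchy stress:
#   sigma(l) = mu * (l**alpha - l**(-alpha/2)),  l = stretch ratio
def ogden_stress(stretch, mu, alpha):
    return mu * (stretch ** alpha - stretch ** (-alpha / 2.0))

mu_true, alpha_true = 10.0, 9.0               # illustrative values (kPa, -)
stretch = np.linspace(1.0, 1.5, 30)
data = ogden_stress(stretch, mu_true, alpha_true)

# Grid search stands in for the paper's non-linear optimisation.
mus = np.linspace(5.0, 15.0, 101)
alphas = np.linspace(5.0, 13.0, 161)
best = min((float(np.sum((ogden_stress(stretch, m, a) - data) ** 2)), m, a)
           for m in mus for a in alphas)
sse, mu_hat, alpha_hat = best
print(mu_hat, alpha_hat)
```

    With noise-free synthetic data the fit recovers the generating parameters; the paper's finding that in-plane data alone predict the out-of-plane response poorly is a reminder that such identifiability depends on the richness of the deformations used.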

  14. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    is the dependency of the parameter estimates on the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...

  15. THREE-PARAMETER CREEP DAMAGE CONSTITUTIVE MODEL AND ITS APPLICATION IN HYDRAULIC TUNNELLING

    OpenAIRE

    Luo Gang; Chen Liang

    2016-01-01

    Rock deformation is a time-dependent process, generally referred to as rheology. Especially in soft rock strata, the design and construction of tunnels must take full account of the rheological properties of the adjoining rocks. Based on the classic three-parameter H-K model (generalized Kelvin model), this paper proposes a three-parameter H-K damage model whose parameters attenuate with increasing equivalent strain, and provides the attenuation equation of the model parameters in the first, second and third stage o...
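
    The classic three-parameter H-K (generalized Kelvin) model that the damage model builds on has a closed-form creep response. The sketch below evaluates it for illustrative parameter values; the damage attenuation of the parameters described in the abstract is not included.

```python
import numpy as np

# Generalized Kelvin (H-K) model: a Hookean spring E_H in series with a
# Kelvin element (spring E_K in parallel with dashpot eta_K).  Creep
# strain under a constant stress sigma0:
#   eps(t) = sigma0/E_H + (sigma0/E_K) * (1 - exp(-E_K*t/eta_K))
def creep_strain(t, sigma0=1.0, E_H=10.0, E_K=5.0, eta_K=50.0):
    return sigma0 / E_H + (sigma0 / E_K) * (1.0 - np.exp(-E_K * t / eta_K))

t = np.linspace(0.0, 100.0, 200)
eps = creep_strain(t)
# eps[0] is the instantaneous elastic strain sigma0/E_H; as t grows the
# strain approaches the asymptote sigma0 * (1/E_H + 1/E_K).
print(eps[0], eps[-1])
```

    Letting E_K or eta_K decay with equivalent strain, as the damage model does, is what allows the response to enter an accelerating third creep stage instead of levelling off at the asymptote.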

  16. Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments

    Science.gov (United States)

    Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh

    2018-03-01

    In make-to-order environments, which operate only in response to customers' orders, a manufacturer seeking to maximize profit should offer the best price and delivery time for an order, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach to pricing, delivery-time setting, and scheduling of newly arriving orders is proposed, based on the existing capacity and the orders already accepted in the system. In the problem, the acquired market demand depends on the price and delivery time of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After being converted to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it with both the literature and current practice. Finally, a sensitivity analysis of the key parameters is carried out.
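
    The trade-off at the heart of such models can be sketched with a hypothetical linear demand in price and quoted delivery time; the competitor terms and the scheduling side of the actual MINLP are omitted, and all coefficients below are assumptions for illustration.

```python
import numpy as np

# Hypothetical linear demand in price p and quoted delivery time L:
#   d(p, L) = a - b*p - c*L
# (the paper's demand model also involves competitors' quotes).
a, b, c = 100.0, 2.0, 5.0
unit_cost = 10.0
capacity = 40.0          # orders the shop can actually complete in time

prices = np.linspace(10.0, 50.0, 401)
lead_times = np.linspace(1.0, 10.0, 91)
P, L = np.meshgrid(prices, lead_times, indexing="ij")
demand = np.clip(a - b * P - c * L, 0.0, capacity)
profit = (P - unit_cost) * demand
i, j = np.unravel_index(profit.argmax(), profit.shape)
print(prices[i], lead_times[j], profit[i, j])
```

    In this toy setting the shortest quotable lead time wins because capacity is not binding; once the capacity clip becomes active, quoting a longer delivery time (or a higher price) becomes the profit-maximizing way to ration demand, which is the coupling the integrated model exploits.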

  17. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of the failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. First, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Second, because potential causes differ in their occurrence frequencies and in their abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed in terms of explanatory variables (the causes' occurrence frequencies) and parameters (the decomposed α-factors). Finally, an example illustrates the process of hierarchical Bayesian inference for the α-decomposition. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. Besides, it provides a reliable way to evaluate uncertainty sources and to reduce uncertainty in probabilistic risk assessment. It is recommended to build databases that include CCF parameters and the occurrence frequencies of the corresponding causes for each targeted system.
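
    As a baseline for the decomposition above, the global α-factors themselves are just fractions of CCF events by multiplicity. The counts below are illustrative, not plant data, and the Dirichlet update is a crude stand-in for the hierarchical Bayesian inference the paper develops.

```python
# Point estimates of global alpha-factors from CCF event counts:
# alpha_k is the fraction of failure events in which exactly k of the
# m components fail together.  Counts are illustrative, not plant data.
m = 3
n = {1: 180, 2: 15, 3: 5}     # n[k] = number of events with exactly k failures

total = sum(n.values())
alpha = {k: n[k] / total for k in n}
print(alpha)                  # alpha_2 = 15 / 200 = 0.075

# A Dirichlet(1, 1, 1) prior gives a posterior mean that shrinks the
# empirical fractions toward uniformity -- a crude stand-in for the
# hierarchical Bayesian update described in the abstract.
prior = {k: 1.0 for k in n}
post_mean = {k: (n[k] + prior[k]) / (total + sum(prior.values())) for k in n}
print(post_mean)
```

    The α-decomposition goes a step further by regressing these global fractions on the occurrence frequencies of individual causes, so that each cause carries its own decomposed α-factor.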

  18. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding the optimal parameters of a heat-supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Sibe