WorldWideScience

Sample records for model parameter adjustments

  1. A metallic solution model with adjustable parameter for describing ternary thermodynamic properties from its binary constituents

    International Nuclear Information System (INIS)

    Fang Zheng; Qiu Guanzhou

    2007-01-01

    A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy of a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the k value is determined from the assumption that the ternary interaction generally strengthens the mixing effects in metallic solutions with weak interaction, making the Gibbs free energy of mixing of the ternary system more negative than before the interaction is considered. This point is not addressed in currently reported models, which differ only in their geometrical definitions of the molar values of the components; such definitions do not involve thermodynamic principles and are entirely empirical. The present model describes experimental results very well and, by adjusting the k value, also agrees with models widely used in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The calculated results, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature.
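
    The abstract does not give the exact weighting scheme or the way k enters the expression, so the sketch below is only a hypothetical illustration: it combines binary excess Gibbs energies with the classical Kohler extrapolation and lets an adjustable k deepen the mixing term, mimicking the workflow of tuning k against data.

```python
# Hypothetical sketch: Kohler-type combination of binary excess Gibbs energies with an
# adjustable parameter k; the paper's actual weighting and the role of k are not given
# in the abstract, so the k-dependent factor below is invented for illustration only.
import numpy as np

# Illustrative Redlich-Kister coefficients (J/mol) for the binaries 1-2, 2-3 and 1-3.
L_BIN = {(0, 1): [-8000.0, 1500.0], (1, 2): [-3000.0, 500.0], (0, 2): [-5000.0, 0.0]}

def g_excess_binary(xa, xb, coeffs):
    """Redlich-Kister excess Gibbs energy of a binary a-b at mole fractions xa, xb."""
    return xa * xb * sum(c * (xa - xb) ** j for j, c in enumerate(coeffs))

def g_excess_ternary(x, k_adj=0.0):
    """Kohler weighted sum of the binary contributions, deepened by an adjustable k."""
    x = np.asarray(x, dtype=float)
    g = 0.0
    for (i, j), coeffs in L_BIN.items():
        s = x[i] + x[j]
        if s > 0.0:
            g += s ** 2 * g_excess_binary(x[i] / s, x[j] / s, coeffs)
    return g * (1.0 + k_adj * np.prod(x))   # hypothetical way of letting k act

print(g_excess_ternary([0.3, 0.4, 0.3], k_adj=0.5))   # J/mol, more negative than with k = 0
```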

  2. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
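
    A much-simplified sketch of the control-adjustment idea is given below: a weak observer-like term proportional to the recorded-minus-model voltage is added to the model dynamics during fitting so the traces stay aligned. The passive one-compartment membrane, parameter values and noise are purely illustrative stand-ins for the multicompartmental conductance-based models fitted in the paper.

```python
# Much-simplified sketch of trace fitting with a weak control adjustment (observer-like
# term); all parameter values and noise levels are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

dt, T = 0.1, 500.0                                   # ms
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(0)
I_inj = 0.05 * np.sqrt(dt) * np.cumsum(rng.standard_normal(t.size))   # noisy drive (nA)

def simulate(params, v_target=None, g_ctrl=0.0):
    """Integrate C dV/dt = -g_L (V - E_L) + I(t), plus g_ctrl*(V_target - V) if given."""
    g_L, C = params
    E_L = -65.0
    v = np.empty_like(t)
    v[0] = E_L
    for i in range(1, t.size):
        dv = (-g_L * (v[i - 1] - E_L) + I_inj[i - 1]) / C
        if v_target is not None:
            dv += g_ctrl * (v_target[i - 1] - v[i - 1])   # weak observer-like correction
        v[i] = v[i - 1] + dt * dv
    return v

true_params = (0.02, 0.2)                            # g_L (uS), C (nF) -- invented
v_recorded = simulate(true_params) + 0.3 * rng.standard_normal(t.size)

def residuals(params):
    # the weak control keeps the model trace aligned with the recording while fitting
    return simulate(params, v_target=v_recorded, g_ctrl=0.05) - v_recorded

fit = least_squares(residuals, x0=(0.05, 0.5), bounds=([1e-3, 1e-2], [1.0, 5.0]))
print("estimated g_L, C:", fit.x)
```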

  3. ADJUSTMENT OF MORPHOMETRIC PARAMETERS OF WATER BASINS BASED ON DIGITAL TERRAIN MODELS

    Directory of Open Access Journals (Sweden)

    Krasil'nikov Vitaliy Mikhaylovich

    2012-10-01

    The authors argue that effective use of water resources requires accurate morphometric characteristics of water basins. Accurate parameters are needed to analyze their condition and to assure their appropriate control and operation. Today, many water basins need their morphometric characteristics to be adjusted and properly stored. The procedure employed so far is based on plane geometric horizontals depicted on topographic maps and is described in the procedural guidelines issued in respect of the «Application of water resource regulations governing the operation of waterworks facilities of power plants». The technology described there is obsolete given the availability of specialized software. The computer-based technique relies on a digital terrain model. In this article the authors provide an overview of the technique as implemented at the Rybinsk and Gorkiy water basins: the digital terrain model generated from field data is used at the Gorkiy water basin, while the model based on maps and charts is applied at the Rybinsk water basin. The authors believe that the software technique can be applied to any other water basin, based on the analysis and comparison of the morphometric characteristics of these two water basins.

  4. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Directory of Open Access Journals (Sweden)

    Belehaki Anna

    2012-12-01

    Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-derived TEC parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model to the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, however, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is calculated incorrectly, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  5. Mistral project: identification and parameter adjustment. Theoretical part; Projet Mistral: identification et recalage des modeles. Etude theorique

    Energy Technology Data Exchange (ETDEWEB)

    Faille, D.; Codrons, B.; Gevers, M.

    1996-03-01

    This document belongs to the methodological part of the MISTRAL project, which builds a library of power plant models. The model equations are generally obtained from first principles. The parameters, however, are not always easy to calculate (at least accurately) from the dimensional data. We are therefore investigating the possibility of automatically adjusting the values of those parameters from experimental data. To do that, we must master optimization algorithms and techniques for analyzing the model structure, such as identifiability theory. (authors). 7 refs., 1 fig., 1 append.

  6. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. Waiting times for each server are analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding servers (pharmacists) reduces patient waiting time appreciably: average patient waiting time is reduced by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
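
    For an analytical cross-check of such M/M/c scenarios, the Erlang C formula gives the mean time spent waiting in the queue. The sketch below uses hypothetical arrival and service rates; the paper's rates come from the collected hospital data and are not reproduced in the abstract.

```python
# Erlang C sketch for an M/M/c queue; the arrival and service rates below are invented,
# not the ones estimated from the pharmacy data in the paper.
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in queue (Wq) for an M/M/c system; requires lam < c * mu."""
    a = lam / mu                          # offered load (Erlangs)
    rho = a / c                           # server utilisation
    if rho >= 1.0:
        raise ValueError("unstable queue: need lam < c * mu")
    p_wait = (a ** c / factorial(c)) / (
        (1.0 - rho) * sum(a ** k / factorial(k) for k in range(c)) + a ** c / factorial(c)
    )
    return p_wait / (c * mu - lam)        # mean time spent waiting for a server

lam, mu = 0.45, 0.25                      # patients/min, prescriptions served/min (assumed)
for c in (2, 3, 4, 5):                    # compare M/M/2 ... M/M/5 as in the study
    print(f"c = {c}: Wq = {erlang_c_wait(lam, mu, c):.2f} min")
```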

  7. Nuclear Data Parameter Adjustment BNL-INL

    International Nuclear Information System (INIS)

    Palmiotti, G.; Hoblit, S.; Herman, M.; Nobre, G.P.A.; Palumbo, A.; Hiruta, H.; Salvatores, M.

    2013-01-01

    This presentation reports on the consistent adjustment of nuclear data parameters performed within a BNL-INL collaboration. The main advantage compared to the classical adjustment of multigroup constants is to provide final nuclear data constrained by nuclear reaction theory and consistent with both differential and integral measurements. The feasibility of a single-isotope assimilation was tested on a few priority materials (23Na, 56Fe, 105Pd, 235,238U, 239Pu) using a selection of clean integral experiments. The multi-isotope assimilation is under study for the Big-3 (235,238U, 239Pu). This work shows that a consistent assimilation is feasible, but there are pitfalls to avoid (e.g. non-linearity, cross section fluctuations) and prerequisites (e.g. realistic covariances, a good prior, realistic weighting of differential and integral experiments). Finally, only all experimental information combined with state-of-the-art modelling may provide a 'right' answer.

  8. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the law of uncertainty propagation in the residual errors, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is thereby extended with a new method for processing observational data with uncertainty.

  9. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in the neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and in a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
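
    The sketch below is a generic stand-in for this idea: a Widrow-Hoff (LMS) classifier whose learning rate is adjusted on-line from a simple statistic of successive weight updates. The correlation-based heuristic and all numbers are illustrative, not the adjustment rule derived in the paper.

```python
# Generic stand-in for learning-rate adjustment during Widrow-Hoff (LMS) learning: the
# step size grows when successive weight updates point in consistent directions and
# shrinks when they oscillate. This heuristic is illustrative, not the paper's rule.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.3 * rng.standard_normal(n))   # noisy linear class labels

w = np.zeros(d)
eta = 0.05
prev_update = np.zeros(d)
for x_i, y_i in zip(X, y):
    update = eta * (y_i - x_i @ w) * x_i                  # Widrow-Hoff / LMS step
    w += update
    eta *= 1.02 if update @ prev_update > 0 else 0.95     # adjust the learning parameter
    prev_update = update

accuracy = np.mean(np.sign(X @ w) == y)
print(f"final eta = {eta:.4f}, training accuracy = {accuracy:.3f}")
```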

  10. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

    As a result, we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...

  11. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Science.gov (United States)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.

  12. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    Science.gov (United States)

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-07

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the growth temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins from differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward heavier and more hydrophobic residues with respect to mesophiles, whereas cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation of the different increase of bulky residues with growth temperature observed in the six model proteins suggests the relevance of the possible different role and/or structural organization of the protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was

  13. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Hiddo Velsink

    2015-01-01

    Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices

  14. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Velsink, H.

    2015-01-01

    This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation

  15. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  16. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models, and Advanced lumped-parameter models. (au)

  17. Adjustment or updating of models

    Indian Academy of Sciences (India)

    Vol. 25, Part 3, June 2000, pp. 235-245. Abstract fragments only: "While the model is defined in terms of these spatial parameters ... discussed in terms of 'model order' with concern focused on whether or not the ... it is not easy to justify what the required ..."

  18. Nuclear data adjustment methodology utilizing resonance parameter sensitivities and uncertainties

    International Nuclear Information System (INIS)

    Broadhead, B.L.

    1983-01-01

    This work presents the development and demonstration of a Nuclear Data Adjustment Method that allows the inclusion of both energy and spatial self-shielding in the adjustment procedure. The resulting adjustments are for the basic parameters (i.e. resonance parameters) in the resonance regions and for the group cross sections elsewhere. The majority of this development effort concerns the production of resonance parameter sensitivity information, which provides the link between the responses of interest and the basic parameters. The resonance parameter sensitivity methodology developed herein usually provides accurate results when compared to direct recalculations using existing and well-known cross section processing codes. However, it has been shown in several cases that self-shielded cross sections can be highly non-linear functions of the basic parameters. For this reason, caution must be used in any study which assumes that a linear relationship exists between a given self-shielded group cross section and its corresponding basic data parameters. The study also points out the need for more approximate techniques that would allow the required sensitivity information to be obtained in a more cost-effective manner.

  19. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
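
    A heavily simplified sketch of the reweighting idea appears below: for a plain linear regression with scaled t-distributed errors (the AR part of the paper's model is omitted and the degree of freedom is held fixed rather than estimated), the EM weights (nu + 1)/(nu + r^2/sigma^2) turn the ML problem into iteratively reweighted least squares.

```python
# Simplified IRLS sketch for linear regression with scaled t-distributed errors; the AR
# colored-noise part of the paper's model is omitted and the degree of freedom nu is
# held fixed instead of being estimated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
tt = np.linspace(0.0, 1.0, n)
A = np.column_stack([np.ones(n), np.sin(2 * np.pi * tt), np.cos(2 * np.pi * tt)])
beta_true = np.array([1.0, 2.0, -0.5])
y = A @ beta_true + 0.1 * rng.standard_t(df=3, size=n)    # heavy-tailed deviations

nu = 3.0                                                  # fixed degree of freedom
beta = np.linalg.lstsq(A, y, rcond=None)[0]               # ordinary LS start
sigma2 = np.var(y - A @ beta)
for _ in range(50):
    r = y - A @ beta
    w = (nu + 1.0) / (nu + r ** 2 / sigma2)               # EM weights of the t-model
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]   # reweighted LS
    sigma2 = np.sum(w * (y - A @ beta) ** 2) / n          # scale update
print("estimated coefficients:", np.round(beta, 3))
```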

  20. Adjustment of the dynamic weight distribution as a sensitive parameter for diagnosis of postural alteration in a rodent model of vestibular deficit.

    Directory of Open Access Journals (Sweden)

    Brahim Tighilet

    Vestibular disorders, by inducing significant posturo-locomotor and cognitive disorders, can significantly impair the most basic tasks of everyday life. Their precise diagnosis is essential to implement appropriate therapeutic countermeasures. Monitoring their evolution is also very important to validate or, on the contrary, to adapt the undertaken therapeutic actions. To date, the diagnostic methods for posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify, in real time and in an automated way, the weight bearing of the animal on the ground. We show here that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo-lesioned rodents. The characteristics of the alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Associated with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to the dizzy or unstable patient could improve the detection of vestibular deficits and allow better prediction of their impact on posture and gait. Thus it could also allow a better follow-up of the therapeutic approaches for rehabilitating gait and balance.

  1. Adjustment of the dynamic weight distribution as a sensitive parameter for diagnosis of postural alteration in a rodent model of vestibular deficit.

    Science.gov (United States)

    Tighilet, Brahim; Péricat, David; Frelat, Alais; Cazals, Yves; Rastoldo, Guillaume; Boyer, Florent; Dumas, Olivier; Chabbert, Christian

    2017-01-01

    Vestibular disorders, by inducing significant posturo-locomotor and cognitive disorders, can significantly impair the most basic tasks of everyday life. Their precise diagnosis is essential to implement appropriate therapeutic countermeasures. Monitoring their evolution is also very important to validate or, on the contrary, to adapt the undertaken therapeutic actions. To date, the diagnostic methods for posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify, in real time and in an automated way, the weight bearing of the animal on the ground. We show here that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo-lesioned rodents. The characteristics of the alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Associated with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to the dizzy or unstable patient could improve the detection of vestibular deficits and allow better prediction of their impact on posture and gait. Thus it could also allow a better follow-up of the therapeutic approaches for rehabilitating gait and balance.

  2. 40 CFR 91.112 - Requirement of certification-adjustable parameters.

    Science.gov (United States)

    2010-07-01

    ... adjustment in the physically available range. (b) An operating parameter is not considered adjustable if it... adjustable range during certification, production line testing, selective enforcement auditing or any in-use...

  3. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters exhibit dynamic behavior under fast-front surges; namely, their residual voltage depends on the peak value as well as the duration of the injected impulse current, and they should therefore not be represented by non-linear elements alone. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage as well as the dissipated energy for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models, by minimizing a defined objective function that compares the simulation outcomes with the manufacturer's data and with the results obtained from previous methodologies.
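
    A minimal harmony search sketch is given below to show the mechanics of the metaheuristic. The objective is a simple stand-in (the real one would compare simulated residual voltage and dissipated energy with manufacturer data), and the memory size, HMCR, PAR and bandwidth values are illustrative.

```python
# Minimal harmony search sketch; the objective function is a placeholder for the real
# comparison against manufacturer data, and all settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def objective(p):                                       # stand-in objective (sphere-like)
    return float(np.sum((p - np.array([0.5, -1.0, 2.0])) ** 2))

dim, lo, hi = 3, -5.0, 5.0
hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.2, 2000     # harmony-search settings

memory = rng.uniform(lo, hi, size=(hms, dim))           # harmony memory
scores = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                         # pick a value from memory ...
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                      # ... with occasional pitch adjustment
                new[j] += bw * rng.uniform(-1.0, 1.0)
        else:                                           # or improvise a random value
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    f = objective(new)
    worst = int(np.argmax(scores))
    if f < scores[worst]:                               # replace the worst harmony
        memory[worst], scores[worst] = new, f

best = memory[int(np.argmin(scores))]
print("best parameters:", best, "objective:", scores.min())
```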

  4. Adjustment model of thermoluminescence experimental data

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.; Moreno B, A.

    2002-01-01

    This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i · exp(−(T − c_i)² / b_i), where a_i, b_i and c_i are the parameters of the i-th peak, each peak being adjusted to a Gaussian curve. The curve adjustment can be carried out manually or analytically using macro functions and the Solver.xla add-in previously installed in the computer system. In this work the following are shown: (1) experimental data from a LiF glow curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated in macro mode; (2) a four-peak LiF curve obtained from Harshaw information and simulated in Microsoft Excel, discussed in previous works, used as a reference without macros. (Author)
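
    The same kind of adjustment can be reproduced outside of Excel; the sketch below fits a sum of Gaussian peaks to a synthetic glow curve with SciPy instead of the Solver.xla macro described above, with invented peak parameters.

```python
# Sketch: fit a sum of Gaussian peaks to a synthetic glow curve; temperatures, peak
# parameters and noise are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def glow_curve(T, *p):
    """I(T) = sum_i a_i * exp(-(T - c_i)**2 / b_i); parameters packed as (a, b, c) triples."""
    I = np.zeros_like(T)
    for a, b, c in zip(p[0::3], p[1::3], p[2::3]):
        I += a * np.exp(-(T - c) ** 2 / b)
    return I

T = np.linspace(320.0, 550.0, 300)                                  # temperature (K)
true = [1.0, 400.0, 380.0,  1.8, 600.0, 450.0,  0.9, 500.0, 500.0]  # three peaks
rng = np.random.default_rng(3)
I_meas = glow_curve(T, *true) + 0.02 * rng.standard_normal(T.size)

guess = [1.0, 500.0, 370.0,  1.5, 500.0, 440.0,  1.0, 500.0, 510.0]
popt, _ = curve_fit(glow_curve, T, I_meas, p0=guess)
print(np.round(popt, 1))
```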

  5. On different types of adjustment usable to calculate the parameters of the stream power law

    Science.gov (United States)

    Demoulin, Alain; Beckers, Arnaud; Bovy, Benoît

    2012-02-01

    Model parameterization through adjustment to field data is a crucial step in the modeling and understanding of the drainage network response to tectonic or climatic perturbations. Using as a test case a data set of 18 knickpoints that materialize the migration of a 0.7-Ma-old erosion wave in the Ourthe catchment of the northern Ardennes (western Europe), we explore the impact of different data-fitting choices on the calibration of the stream power model of river incision, from which a simple knickpoint celerity equation is derived. Our results show that statistical least-squares adjustments (or misfit functions) based either on the stream-wise distances between observed and modeled knickpoint positions at time t or on the differences between observed and modeled times at the actual knickpoint locations yield significantly different values for the m and K parameters of the model. As there is no physical reason to prefer one of these approaches, an intermediate least-rectangles adjustment might at first glance appear to be the best compromise. However, the statistics of the analysis of 200 sets of synthetic knickpoints generated in the Ourthe catchment indicate that the time-based adjustment is the most capable of getting close to the true parameter values. Moreover, this fitting method leads in all cases to an m value lower than that obtained from the classical distance adjustment (for example, 0.75 against 0.86 for the real case of the Ourthe catchment), corresponding to an increase in the non-linear character of the dependence of knickpoint celerity on discharge.
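
    The distinction between the two misfits can be made concrete with a toy calibration of the knickpoint celerity law dx/dt = K·A(x)^m. Everything below is synthetic (drainage-area profiles, true parameter values, noise); it only illustrates how the distance-based and time-based objectives are set up, not the Ourthe results.

```python
# Synthetic illustration of distance-based vs time-based misfits for calibrating the
# knickpoint celerity law dx/dt = K * A(x)**m; area profiles, true values and noise are
# invented and do not reproduce the Ourthe data set.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_kp, t0 = 18, 0.7e6                          # number of knickpoints, age of the signal (yr)
A0 = 10.0 ** rng.uniform(7.5, 9.0, n_kp)      # outlet drainage areas of the tributaries (m^2)
L = 30_000.0                                  # tributary length (m)

def cumulative_time(K, m, a0, n=2000):
    """Travel time t(x) of the erosion wave from the outlet (x=0) to each x."""
    xs = np.linspace(0.0, 0.999 * L, n)
    area = a0 * ((L - xs) / L + 1.0e-3) ** 1.7
    inv_c = 1.0 / (K * area ** m)
    ts = np.concatenate(([0.0], np.cumsum(0.5 * (inv_c[1:] + inv_c[:-1]) * np.diff(xs))))
    return xs, ts

def predict(K, m, a0, x=None, t=None):
    xs, ts = cumulative_time(K, m, a0)
    return np.interp(t, ts, xs) if x is None else np.interp(x, xs, ts)

K_true, m_true = 2.0e-6, 0.5
x_obs = np.array([predict(K_true, m_true, a, t=t0) for a in A0])
x_obs += 300.0 * rng.standard_normal(n_kp)    # position noise (m)

def misfit(params, time_based):
    K, m = 10.0 ** params[0], params[1]
    if time_based:                            # observed minus modelled times at x_obs
        t_pred = np.array([predict(K, m, a, x=x) for a, x in zip(A0, x_obs)])
        return np.sum((t0 - t_pred) ** 2)
    x_pred = np.array([predict(K, m, a, t=t0) for a in A0])
    return np.sum((x_obs - x_pred) ** 2)      # observed minus modelled positions at t0

for time_based in (False, True):
    res = minimize(misfit, x0=(-5.5, 0.7), args=(time_based,), method="Nelder-Mead")
    label = "time-based    " if time_based else "distance-based"
    print(f"{label}: K = {10.0 ** res.x[0]:.2e}, m = {res.x[1]:.2f}")
```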

  6. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined by adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
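
    The core idea can be sketched numerically: given residuals between theory and experiment and known experimental uncertainties, the model error is the extra variance that maximizes a Gaussian likelihood. The formulation and numbers below are a generic illustration, not the report's specific expressions.

```python
# Sketch of the maximum-likelihood idea for a theoretical model error: find the extra
# variance sigma_th**2 that, added to the experimental variances, maximizes a Gaussian
# likelihood of the theory-minus-experiment residuals. Data are simulated.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n = 200
sig_exp = rng.uniform(0.1, 0.3, n)                 # known experimental uncertainties
sigma_th_true = 0.5
residuals = rng.normal(0.0, np.sqrt(sig_exp ** 2 + sigma_th_true ** 2))

def neg_log_likelihood(sigma_th):
    var = sig_exp ** 2 + sigma_th ** 2
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + residuals ** 2 / var)

fit = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 5.0), method="bounded")
print("estimated theoretical model error:", round(fit.x, 3))
```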

  7. Premium adjustment: actuarial analysis on epidemiological models ...

    African Journals Online (AJOL)

    In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes extremely difficult for the insurer since a negative reserve would severely increase its risk of insolvency, ...

  8. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  9. The pre-processing technique and parameter adjustment influences ...

    African Journals Online (AJOL)

    ...based BPSO structure for time-varying water temperature modelling. N. Hambali, M.N. Taib, I.M. Yassin, M.H.F. Rahiman. No abstract available. Keywords: identification; NARX; particle swarm optimization; distillation column; temperature.

  10. Player Modeling for Intelligent Difficulty Adjustment

    Science.gov (United States)

    Missura, Olana; Gärtner, Thomas

    In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and by supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen and how it should increase over time strongly depend on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead players to stop playing the game as they get bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically, governed by the player's performance. Modern video games use a game-testing process to investigate, among other factors, the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.

  11. OPEC model : adjustment or new model

    International Nuclear Information System (INIS)

    Ayoub, A.

    1994-01-01

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, and privatization. This article provides answers to two major questions: What are the reasons for these changes? Do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil-producing countries. (Author)

  12. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat- and mass-transfer-related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  13. Set up of a method for the adjustment of resonance parameters on integral experiments

    International Nuclear Information System (INIS)

    Blaise, P.

    1996-01-01

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions. The author sets up a method for the adjustment of resonance parameters that takes into account the self-shielding effects and restricts the cross section deconvolution problem to a limited energy region. (N.T.)

  14. Photovoltaic module parameters acquisition model

    Energy Technology Data Exchange (ETDEWEB)

    Cibira, Gabriel, E-mail: cibira@lm.uniza.sk; Koščová, Marcela, E-mail: mkoscova@lm.uniza.sk

    2014-09-01

    Highlights: • A photovoltaic five-parameter model is proposed using Matlab® and Simulink. • The model acquires an input sparse data matrix from stigmatic measurement. • Computer simulations lead to continuous I–V and P–V characteristics. • Extrapolated I–V and P–V characteristics are in hand. • The model allows us to predict photovoltaic exploitation in different conditions. - Abstract: This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a MATLAB and Simulink theoretical model is set up to calculate I–V and P–V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I–V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters can be acquired for different realistic irradiation and temperature conditions as well as series resistance values. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.
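
    The equivalent-circuit calculation behind such characteristics is the standard single-diode five-parameter model; the sketch below solves it point by point to obtain an I–V curve. The parameter values are illustrative assumptions, not the ones identified in the paper.

```python
# Single-diode five-parameter model sketch: solve the implicit equation
# I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh for each voltage point.
# The parameter values are assumed for illustration, not identified from the paper.
import numpy as np
from scipy.optimize import brentq

Iph, I0, Rs, Rsh, n_ideal = 8.2, 5e-8, 0.35, 250.0, 1.3   # the five parameters (assumed)
Ns, Vt = 60, 0.0257                                       # series cells, thermal voltage (V)

def current(V):
    def f(I):
        return (Iph - I0 * (np.exp((V + I * Rs) / (n_ideal * Ns * Vt)) - 1.0)
                - (V + I * Rs) / Rsh - I)
    return brentq(f, -5.0, Iph + 1.0)                     # bracket the unique root

V = np.linspace(0.0, 38.0, 200)
I = np.array([current(v) for v in V])
P = V * I
print(f"Isc ~ {I[0]:.2f} A, Voc ~ {V[np.argmin(np.abs(I))]:.1f} V, Pmax ~ {P.max():.0f} W")
```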

  15. Photovoltaic module parameters acquisition model

    International Nuclear Information System (INIS)

    Cibira, Gabriel; Koščová, Marcela

    2014-01-01

    Highlights: • A photovoltaic five-parameter model is proposed using Matlab® and Simulink. • The model acquires an input sparse data matrix from stigmatic measurement. • Computer simulations lead to continuous I–V and P–V characteristics. • Extrapolated I–V and P–V characteristics are in hand. • The model allows us to predict photovoltaic exploitation in different conditions. - Abstract: This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a MATLAB and Simulink theoretical model is set up to calculate I–V and P–V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I–V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters can be acquired for different realistic irradiation and temperature conditions as well as series resistance values. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.

  16. Nonlinear predictive control for adaptive adjustments of deep brain stimulation parameters in basal ganglia-thalamic network.

    Science.gov (United States)

    Su, Fei; Wang, Jiang; Niu, Shuangxia; Li, Huiyan; Deng, Bin; Liu, Chen; Wei, Xile

    2018-02-01

    The efficacy of deep brain stimulation (DBS) for Parkinson's disease (PD) depends in part on the post-operative programming of stimulation parameters. Closed-loop stimulation is one method to realize the frequent adjustment of stimulation parameters. This paper introduces the nonlinear predictive control method into the online adjustment of DBS amplitude and frequency. The approach was tested in a computational model of the basal ganglia-thalamic network. An autoregressive Volterra model was used to identify the process model based on physiological data. Simulation results illustrate the efficiency of the closed-loop stimulation methods (amplitude adjustment and frequency adjustment) in improving the relay reliability of thalamic neurons compared with the PD state. In addition, compared with constant 130 Hz DBS, the closed-loop stimulation methods significantly reduce energy consumption. An analysis of the inter-spike-interval (ISI) distributions of basal ganglia neurons shows that the network activity evoked by the closed-loop frequency adjustment stimulation is closer to the normal state. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhaustive manual selection process inevitably wastes a great deal of time. Therefore, in this paper we propose a new intelligent adjustment method that adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters and obtain the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.

  18. Examining the Correlation between Objective Injury Parameters, Personality Traits and Adjustment Measures among Burn Victims

    Directory of Open Access Journals (Sweden)

    Josef Mordechai Haik

    2015-03-01

    Background: Burn victims experience immense physical and mental hardship during their process of rehabilitation and regaining functionality. We examined different objective burn-related factors, as well as psychological ones in the form of personality traits, that may affect the rehabilitation process and its outcome. Objective: To assess the influence and correlation of specific personality traits and objective injury-related parameters on the adjustment of burn victims post-injury. Methods: 62 male patients admitted to our burn unit due to burn injuries were compared with 36 healthy male individuals by use of questionnaires to assess each group's psychological adjustment parameters. Multivariate and hierarchical regression analyses were conducted to identify differences between the groups. Results: A significant negative correlation was found between the objective burn injury severity (e.g. TBSA and burn depth) and the adjustment of burn victims (p<0.05, p<0.001; Table 3). Moreover, more severely injured patients tend to be more neurotic (p<0.001), and less extroverted and agreeable (p<0.01; Table 4). Conclusions: Extroverted burn victims tend to adjust better to their post-injury life, while neurotic patients tend to have difficulties adjusting. This finding may suggest new tools for the early identification of maladjustment-prone patients, allowing better and more dedicated psychological support to be provided to them.

  19. Suppression of threshold voltage variability in MOSFETs by adjustment of ion implantation parameters

    Science.gov (United States)

    Park, Jae Hyun; Chang, Tae-sig; Kim, Minsuk; Woo, Sola; Kim, Sangsig

    2018-01-01

    In this study, we investigate the threshold voltage (VTH) variability of metal-oxide-semiconductor field-effect transistors induced by random dopant fluctuation (RDF). Our simulation work demonstrates not only the influence of the implantation parameters, such as dose, tilt angle, energy, and rotation angle, on the RDF-induced VTH variability, but also a solution to reduce this variability. By adjusting the ion implantation parameters, the 3σ(VTH) spread is reduced from 43.8 mV to 28.9 mV. This 34% reduction is significant, considering that our technique is very cost-effective and facilitates easy fabrication, increasing availability.

  20. An efficient method to generate a perturbed parameter ensemble of a fully coupled AOGCM without flux-adjustment

    Directory of Open Access Journals (Sweden)

    P. J. Irvine

    2013-09-01

    We present a simple method to generate a perturbed parameter ensemble (PPE) of a fully coupled atmosphere-ocean general circulation model (AOGCM), HadCM3, without requiring flux adjustment. The aim was to produce an ensemble that samples parametric uncertainty in some key variables and gives a plausible representation of the climate. Six atmospheric parameters, a sea-ice parameter and an ocean parameter were jointly perturbed within a reasonable range to generate an initial group of 200 members. To screen out implausible ensemble members, 20 yr pre-industrial control simulations were run, and members whose temperature responses to the parameter perturbations were projected to fall outside the range of 13.6 ± 2 °C, i.e. near the observed pre-industrial global mean, were discarded. Twenty-one members, including the standard unperturbed model, were accepted, covering almost the entire span of the eight parameters and challenging the argument that without flux adjustment the parameter ranges would be unduly restricted. This ensemble was used in two experiments: an 800 yr pre-industrial simulation and a 150 yr quadrupled-CO2 simulation. The behaviour of the PPE for the pre-industrial control compared well to ERA-40 reanalysis data and the CMIP3 ensemble for a number of surface and atmospheric column variables, with the exception of a few members in the Tropics. However, we find that members of the PPE with low values of the entrainment rate coefficient show very large increases in upper tropospheric and stratospheric water vapour concentrations in response to elevated CO2, and one member showed an implausible nonlinear climate response; as such, they will be excluded from future experiments with this ensemble. The outcome of this study is a PPE of a fully coupled AOGCM which samples parametric uncertainty, and a simple methodology which would be applicable to other GCMs.

  1. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    The Sharpe ratio is a widely used risk-adjusted performance measure in economics and finance. Most of the known statistical inference methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically chi-square distributed. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
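
    The adjustment itself (a pseudo-observation that guarantees the empirical likelihood equations have a solution) is easy to sketch for a simple mean parameter. The code below follows the adjustment level of Chen, Variyath and Abraham (2008); handling the Sharpe ratio with nuisance parameters, as in the paper, would require profiling and is omitted here.

```python
# Sketch of the adjusted empirical likelihood (AEL) for a simple mean parameter; the
# returns series and null value are invented for illustration.
import numpy as np
from scipy.optimize import brentq

def ael_stat(x, mu):
    """-2 * log adjusted empirical likelihood ratio for H0: E[X] = mu."""
    g = np.asarray(x, dtype=float) - mu
    n = g.size
    a = max(1.0, np.log(n) / 2.0)
    g = np.append(g, -a * g.mean())              # pseudo-observation guarantees a solution
    lo = -1.0 / g.max() + 1e-10                  # keep all weights 1 + lam*g_i positive
    hi = -1.0 / g.min() - 1e-10
    lam = brentq(lambda s: np.sum(g / (1.0 + s * g)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * g))       # asymptotically chi-square, 1 d.o.f.

rng = np.random.default_rng(6)
returns = 0.004 + 0.01 * rng.standard_t(df=5, size=250)   # toy monthly excess returns
stat = ael_stat(returns, 0.0)
print(f"AEL statistic for H0: mean = 0 -> {stat:.2f}, reject at 5%: {stat > 3.84}")
```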

  2. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    Science.gov (United States)

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  3. Parameters-adjustable front-end controller in digital nuclear measurement system

    International Nuclear Information System (INIS)

    Hao Dejian; Zhang Ruanyu; Yan Yangyang; Wang Peng; Tang Changjian

    2013-01-01

    Background: A digitizer is used to implement a digital nuclear measurement for the acquisition of nuclear information. Purpose: A principle and method for a parameter-adjustable front-end controller are presented, with the aim of reducing quantization errors while obtaining the maximum ENOB (effective number of bits) of the ADC (analog-to-digital converter) during waveform digitizing, as well as reducing count losses. Methods: First, the quantitative relationship among the radiation count rate (n), the amplitude of the input signal (V_in), the conversion scale of the ADC (±V) and the amplification factor (A) was derived. Second, the hardware and software of the front-end controller were designed to match the output of different detectors, to adjust the amplification linearly through the control of channel switching, and to set the digital potentiometer via a CPLD (Complex Programmable Logic Device). Results: (1) Through the measurement of the γ-rays of Am-241 with our digital nuclear measurement set-up and a CZT detector, it was validated that the amplitude of the output signal of detectors of the RC-feedback type could be amplified linearly with adjustable amplification by the front-end controller. (2) Through the measurement of the X-ray spectrum of Fe-55 with our digital nuclear measurement set-up and a Si-PIN detector, it was validated that the front-end controller is also suitable for switch-reset type detectors, with which high precision measurements at various count rates could be achieved. Conclusion: The principle and method of the parameter-adjustable front-end controller presented in this paper are correct and feasible. (authors)

  4. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors' opinions on the potential for consistently implementing basic accounting principles when journaling adjusting entries for loan receivables under a dynamic model.

  5. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with AP.

  6. Evaluation of the impact of adjusting the angle of the axis of a wind turbine rotor relative to the flow of air stream on operating parameters of a wind turbine model

    Directory of Open Access Journals (Sweden)

    Gumuła Stanisław

    2017-01-01

    The aim of this study was to determine the effect of adjusting the angle of the wind turbine rotor axis relative to the wind direction on the amount of energy produced by wind turbines. The role of an optimal setting of the rotor blades was specified as well. According to the measurements, changes in the tilt angle of the wind turbine rotor axis relative to the air stream flow direction cause changes in the use of wind energy. The publication explores the effects of the operating conditions of wind turbines on the possibility of using wind energy. A range of factors affects the operation of a wind turbine, and thus the amount of energy produced by the plant: among them are the design parameters of the wind power plant, climatic factors, and seismic challenges associated with the location. One of these parameters proved to be the setting of the rotor axis relative to the direction of the air stream flow. The studies have shown that accurate determination of the optimum angle of the rotor axis with respect to the air stream flow strongly influences the characteristics of the wind turbine.

  7. Towards a Collision-Free WLAN: Dynamic Parameter Adjustment in CSMA/E2CA

    Directory of Open Access Journals (Sweden)

    Bellalta Boris

    2011-01-01

    Carrier sense multiple access with enhanced collision avoidance (CSMA/ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA/ECA and the well-known CSMA/CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots needed to reach collision-free operation. The paper also introduces CSMA/E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA/E2CA converges more quickly to collision-free operation and delivers higher performance than CSMA/ECA, especially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation.
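
    The convergence to a collision-free schedule can be seen in an idealised slotted toy model: with a deterministic backoff after success, stations settle into distinct phases, whereas random backoff keeps producing collisions. The simulation below is only a sketch (fixed contention window, no exponential backoff, no frame errors, no E2CA variant).

```python
# Idealised slotted toy model contrasting random backoff after success (CSMA/CA-like,
# fixed contention window) with deterministic backoff after success (CSMA/ECA).
import random

def simulate(n_stations, deterministic_after_success, slots=20000, cw=16, seed=0):
    rng = random.Random(seed)
    det = cw // 2 - 1                                     # deterministic backoff value
    backoff = [rng.randrange(cw) for _ in range(n_stations)]
    collisions = successes = last_collision = 0
    for slot in range(slots):
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:                               # exactly one sender: success
            successes += 1
            backoff[ready[0]] = det if deterministic_after_success else rng.randrange(cw)
        elif len(ready) > 1:                              # collision: all retry randomly
            collisions += 1
            last_collision = slot
            for i in ready:
                backoff[i] = rng.randrange(cw)
        # stations that did not transmit count down; senders keep their fresh counter
        backoff = [b if i in ready else b - 1 for i, b in enumerate(backoff)]
    return collisions, successes, last_collision

for name, det in (("CSMA/CA ", False), ("CSMA/ECA", True)):
    c, s, last = simulate(n_stations=6, deterministic_after_success=det)
    print(f"{name}: {c} collisions, {s} successes, last collision in slot {last}")
```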

  8. Five adjustable parameter fit of quark and lepton masses and mixings

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Takanishi, Y.

    2002-05-01

    We develop a model of ours for fitting the quark and lepton masses and mixing angles by removing from it a Higgs field previously introduced to produce a large atmospheric mixing angle for neutrino oscillations. Because the off-diagonal elements dominate the see-saw neutrino mass matrix, the large atmospheric mixing angle arises essentially by itself. It turns out that only five adjustable Higgs field vacuum expectation values are now needed to fit all the masses and mixings order-of-magnitude-wise, taking into account the renormalisation group running in all sectors. The CHOOZ angle comes out close to the experimental bound. (orig.)

  9. Reflector modelization for neutronic diffusion and parameters identification

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-04-01

    Physical parameters of the neutron diffusion equations can be adjusted to reduce calculation-measurement errors. Since the reflector is always difficult to model, we chose to develop a new reflector model and to use the parameters of this model as adjustment coefficients in the identification procedure. Using theoretical results, as well as the physical behaviour of the neutron flux solutions, the reflector model then consists of replacing the reflector by boundary conditions for the diffusion equations on the core only. This theoretical result, expressed as non-local operator relations, leads to discrete approximations that take into account the multiscale behaviour of the neutron diffusion solutions at the core-reflector interface. The resulting model is then compared with previous reflector models, and first results indicate that the new model represents the reflector for the core as well as the previous ones. (author). 12 refs

  10. Determination of parameters in elasto-plastic models of aluminium.

    NARCIS (Netherlands)

    Meuwissen, M.H.H.; Oomens, C.W.J.; Baaijens, F.P.T.; Petterson, R.; Janssen, J.D.; Sol, H.; Oomens, C.W.J.

    1997-01-01

    A mixed numerical-experimental method is used to determine parameters in elasto-plastic constitutive models. An aluminium plate of non-standard geometry is mounted in a uniaxial tensile testing machine, to which some adjustments are made in order to carry out shear tests. The sample is loaded and the total

  11. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT), is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  12. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category that characterizes economic growth. The paper proposes a study of an adjusted R.M. Solow model of economic growth, the adjustment consisting of adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to the macroeconomic modelling theme using the R.M. Solow model, followed by “Measurement of the economic growth and extensions of the R.M. Solow adjusted model” and “Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model”. The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economy level is also used; it is built from a specified number of representative consumers and firms in order to reveal the interaction between these elements.
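
    For readers unfamiliar with the underlying dynamics, the sketch below simulates only the textbook Solow capital-accumulation equation with made-up parameter values; it is not the Romanian-calibrated adjustment discussed in the article.

    ```python
    import numpy as np

    # Textbook Solow dynamics with Cobb-Douglas technology (illustrative values,
    # not the Romanian calibration):  dk/dt = s*k**alpha - (n + delta)*k
    def simulate_solow(s=0.25, alpha=0.33, n=0.01, delta=0.05, k0=1.0, dt=0.1, T=200.0):
        k = k0
        for _ in range(int(T / dt)):
            k += dt * (s * k ** alpha - (n + delta) * k)
        return k

    k_star = (0.25 / (0.01 + 0.05)) ** (1.0 / (1.0 - 0.33))   # analytical steady state
    print(simulate_solow(), k_star)                            # both close to 8.4
    ```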

  13. Parenting Stress, Mental Health, Dyadic Adjustment: A Structural Equation Model

    Directory of Open Access Journals (Sweden)

    Luca Rollè

    2017-05-01

    Full Text Available Objective: In the 1st year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, there are few studies that analyze the relationship among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed the full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ across mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood.

  14. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
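
    A minimal sketch of the computational baseline the article improves upon — repeatedly solving the PDE numerically for each candidate parameter — is shown below for a single diffusion coefficient theta in u_t = theta * u_xx, with synthetic data; the discretisation and values are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Estimate the diffusion coefficient theta in u_t = theta * u_xx by re-solving
    # the PDE with explicit finite differences for every candidate value (the
    # costly baseline discussed in the article).  The "measurements" are synthetic.
    def solve_heat(theta, nx=50, nt=400, dt=1e-4):
        x = np.linspace(0.0, 1.0, nx)
        u = np.sin(np.pi * x)                              # initial condition
        dx = x[1] - x[0]
        for _ in range(nt):
            u[1:-1] += theta * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return u

    true_theta = 0.8
    rng = np.random.default_rng(0)
    data = solve_heat(true_theta) + 0.01 * rng.normal(size=50)

    fit = least_squares(lambda th: solve_heat(th[0]) - data, x0=[0.3], bounds=(0.01, 2.0))
    print(fit.x)                                            # close to 0.8
    ```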

  15. Quality assessment for radiological model parameters

    International Nuclear Information System (INIS)

    Funtowicz, S.O.

    1989-01-01

    A prototype framework for representing uncertainties in radiological model parameters is introduced. This follows earlier development in this journal of a corresponding framework for representing uncertainties in radiological data. Refinements and extensions to the earlier framework are needed in order to take account of the additional contextual factors consequent on using data entries to quantify model parameters. The parameter coding can in turn feed into methods for evaluating uncertainties in calculated model outputs. (author)

  16. Luminescence model with quantum impact parameter for low energy ions

    CERN Document Server

    Cruz-Galindo, H S; Martínez-Davalos, A; Belmont-Moreno, E; Galindo, S

    2002-01-01

    We have modified an analytical model of induced light production by energetic ions interacting in scintillating materials. The original model is based on the distribution of energy deposited by secondary electrons produced along the ion's track. The range of scattered electrons, and thus the energy distribution, depends on a classical impact parameter between the electron and the ion's track. The only adjustable parameter of the model is the quenching density ρ_q. The modification presented here consists of proposing a quantum impact parameter that leads to a better fit of the model to the experimental data at low incident ion energies. The light output response of CsI(Tl) detectors to low energy ions (<3 MeV/A) is fitted with the modified model and comparison is made to the original model.

  17. Establishing statistical models of manufacturing parameters

    International Nuclear Information System (INIS)

    Senevat, J.; Pape, J.L.; Deshayes, J.F.

    1991-01-01

    This paper reports on the effect of pilgering and cold-work parameters on the contractile strain ratio and the mechanical properties, which were investigated using a large population of Zircaloy tubes. Statistical models were established between: the contractile strain ratio and the tooling parameters; the mechanical properties (tensile test, creep test) and the cold-work parameters; and the mechanical properties and the stress-relieving temperature

  18. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data, and the quality of the input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational errors on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. The model was calibrated with this modified data, and the effect of the measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study for each parameter vector). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
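
    The following stripped-down sketch illustrates only the perturbation experiment: a toy linear-reservoir model stands in for HBV, and the spread of re-calibrated parameters stands in for Tukey's half-space depth; all names and values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Toy stand-in for a rainfall-runoff model: a single linear reservoir,
    # Q[t] = k * S[t],  S[t+1] = S[t] + P[t] - Q[t].
    def simulate(k, rain):
        storage, flow = 1.0, []
        for p in rain:
            q = k * storage
            flow.append(q)
            storage += p - q
        return np.array(flow)

    rain = rng.exponential(1.0, 200)
    q_true = simulate(0.3, rain)

    def calibrate(q_obs):
        loss = lambda k: np.mean((simulate(k[0], rain) - q_obs) ** 2)
        return minimize(loss, x0=[0.5], bounds=[(0.01, 0.99)]).x[0]

    # Re-calibrate against repeatedly perturbed "observations" and look at the
    # spread of the fitted parameter (the paper uses half-space depth instead).
    fits = [calibrate(q_true + rng.normal(0.0, 0.1, q_true.size)) for _ in range(30)]
    print(round(float(np.mean(fits)), 3), round(float(np.std(fits)), 3))
    ```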

  19. Model parameter updating using Bayesian networks

    International Nuclear Information System (INIS)

    Treml, C.A.; Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  20. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis over the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  1. Parameter identification in multinomial processing tree models

    NARCIS (Netherlands)

    Schmittmann, V.D.; Dolan, C.V.; Raijmakers, M.E.J.; Batchelder, W.H.

    2010-01-01

    Multinomial processing tree models form a popular class of statistical models for categorical data that have applications in various areas of psychological research. As in all statistical models, establishing which parameters are identified is necessary for model inference and selection on the basis

  2. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  3. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  4. Parameter identification in a nonlinear nuclear reactor model using quasilinearization

    International Nuclear Information System (INIS)

    Barreto, J.M.; Martins Neto, A.F.; Tanomaru, N.

    1980-09-01

    Parameter identification in a nonlinear, lumped parameter, nuclear reactor model is carried out using discrete output power measurements during the transient caused by an external reactivity change. In order to minimize the difference between the model and the reactor power responses, the prompt neutron generation time and a parameter in the fuel temperature reactivity coefficient equation are adjusted using quasilinearization. The influences of the external reactivity disturbance, the number and frequency of measurements and the measurement noise level on the method accuracy and rate of convergence are analysed through simulation. Procedures for the design of the identification experiments are suggested. The method proved to be very effective for low-noise measurements. (Author) [pt

  5. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that th...

  6. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  7. Exploiting intrinsic fluctuations to identify model parameters.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.
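
    A toy illustration of the central idea, using the immigration-death example: two parameter sets with the same deterministic steady state λ/μ are indistinguishable from the mean alone but produce clearly different fluctuation sizes. The Gillespie simulation below is illustrative only and is not the authors' objective function.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def immigration_death(lam, mu, t_end=200.0, dt_sample=0.1, x0=10):
        """Gillespie simulation of  0 -> X (rate lam)  and  X -> 0 (rate mu*X),
        sampled on a regular time grid."""
        t, x = 0.0, x0
        grid = np.arange(0.0, t_end, dt_sample)
        samples, i = np.empty(grid.size, dtype=int), 0
        while i < grid.size:
            total = lam + mu * x
            t_next = t + rng.exponential(1.0 / total)
            while i < grid.size and grid[i] < t_next:
                samples[i] = x
                i += 1
            x += 1 if rng.random() < lam / total else -1
            t = t_next
        return samples

    # Two parameter sets with the same deterministic steady state lam/mu = 10:
    # the mean cannot separate them, but the increment variance can.
    for lam, mu in [(10.0, 1.0), (100.0, 10.0)]:
        x = immigration_death(lam, mu)
        steady = x[500:]                      # discard the transient
        print(lam, mu, round(steady.mean(), 2), round(np.diff(steady).var(), 2))
    ```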

  8. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software for modeling biological networks, such as e.g. signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  9. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and the Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
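
    A minimal example of this kind of fit, using SciPy's Levenberg-Marquardt-based curve_fit on the Chapman-Richards form with synthetic height-age data (the data and starting values are made up):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Chapman-Richards growth form: H(t) = a * (1 - exp(-b * t)) ** c
    def chapman_richards(t, a, b, c):
        return a * (1.0 - np.exp(-b * t)) ** c

    age = np.arange(5, 85, 5, dtype=float)
    rng = np.random.default_rng(3)
    height = chapman_richards(age, 32.0, 0.04, 1.4) + rng.normal(0.0, 0.5, age.size)

    params, cov = curve_fit(chapman_richards, age, height, p0=[30.0, 0.05, 1.0])
    print(params)            # approximately [32, 0.04, 1.4]
    ```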

  10. Asymmetric adjustment

    NARCIS (Netherlands)

    2010-01-01

    A method of adjusting a signal processing parameter for a first hearing aid and a second hearing aid forming parts of a binaural hearing aid system to be worn by a user is provided. The binaural hearing aid system comprises a user specific model representing a desired asymmetry between a first ear

  11. Wind Farm Decentralized Dynamic Modeling With Parameters

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran

    2010-01-01

    Development of dynamic wind flow models for wind farms is part of the research in the European FP7 project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from local models. The results of this report are especially useful, but not limited, to the design of a decentralized wind farm controller, since in centralized controller design one can also use the model and update it in a central computing node.

  12. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
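
    For the Bradley-Terry special case (pair comparisons), maximum-likelihood strengths can be computed with the classical minorisation (fixed-point) update; the sketch below uses synthetic data and is not the estimator analysed in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic pair comparisons drawn from a Bradley-Terry model with known strengths.
    true_strength = np.array([1.0, 2.0, 3.0, 4.0])
    n_items = true_strength.size
    wins = np.zeros((n_items, n_items))                  # wins[i, j] = times i beat j
    for _ in range(5000):
        i, j = rng.choice(n_items, size=2, replace=False)
        p_i_wins = true_strength[i] / (true_strength[i] + true_strength[j])
        if rng.random() < p_i_wins:
            wins[i, j] += 1
        else:
            wins[j, i] += 1

    # Classical minorisation (fixed-point) iteration for the ML strengths.
    games = wins + wins.T
    strength = np.ones(n_items)
    for _ in range(200):
        for i in range(n_items):
            denom = sum(games[i, j] / (strength[i] + strength[j])
                        for j in range(n_items) if j != i)
            strength[i] = wins[i].sum() / denom
        strength /= strength.sum()

    print(np.round(strength / strength[0], 2))           # roughly [1, 2, 3, 4]
    ```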

  13. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculation of the widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of newer, more reliable experimental data are adopted, which include the average level spacing D, the radiative capture width Γ_γ0 at the neutron binding energy, and the cumulative level number N_0 at low excitation energy. They were published during 1973 to 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements

  14. Systematic parameter inference in stochastic mesoscopic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived from the microscopic level in a straightforward way.
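
    A much simplified stand-in for the approach: a sparse polynomial response surface fitted with Lasso (as a proxy for compressive-sensing gPC) to a toy target property of three parameters; the function, sample size and penalty are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    # Toy "target property" of three model parameters; in the real workflow each
    # evaluation would be one DPD/eDPD simulation at a sampled parameter set.
    def target_property(x):
        return 2.0 + 1.5 * x[:, 0] - 0.8 * x[:, 2] ** 2 + 0.3 * x[:, 0] * x[:, 1]

    samples = rng.uniform(-1.0, 1.0, size=(40, 3))       # deliberately few samples
    y = target_property(samples) + rng.normal(0.0, 0.01, 40)

    # Sparse polynomial surrogate: only a handful of second-order terms matter,
    # so an L1 penalty (standing in for compressive sensing) recovers them.
    basis = PolynomialFeatures(degree=2, include_bias=False)
    phi = basis.fit_transform(samples)
    surrogate = Lasso(alpha=1e-3, max_iter=100000).fit(phi, y)

    print(dict(zip(basis.get_feature_names_out(), np.round(surrogate.coef_, 2))))
    ```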

  15. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...

  16. Models and parameters for environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base

  17. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)

    ... corresponding single-parameter Winkler model presented in this work. Keywords: Heterogeneous subgrade, Reissner's simplified continuum, Shear interaction, Simplified continuum, Winkler ... model in practical applications and its long time familiarity among practical engineers, its usage has endured to this date ...

  18. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.]

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  19. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  20. Models and parameters for environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.

    1983-01-01

    This article reviews the forthcoming book Models and Parameters for Environmental Radiological Assessments, which presents a unified compilation of models and parameters for assessing the impact on man of radioactive discharges, both routine and accidental, into the environment. Models presented in this book include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Summaries are presented for each of the transport and dosimetry areas previously mentioned, and details are available in the literature cited. A chapter of example problems illustrates many of the methodologies presented throughout the text. Models and parameters presented are based on the results of extensive literature reviews and evaluations performed primarily by the staff of the Health and Safety Research Division of Oak Ridge National Laboratory

  1. Capital Structure: Target Adjustment Model and a Mediation Moderation Model with Capital Structure as Mediator

    OpenAIRE

    Abedmajid, Mohammed

    2015-01-01

    This study consists of two models. Model 1 is conducted to check whether there is target adjustment toward an optimal capital structure, in the context of Turkish firms listed on the stock market, over the period 2003-2014. Model 2 captures the interaction between firm size, profitability, market value and capital structure using the moderation mediation model. The results of Model 1 have shown that there is a partial adjustment of the capital structure toward target levels. The results of...

  2. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model, SUBRECN, has been developed. In this model the combined effect of depth-dependent subrosion, glass dissolution, and salt rise has been taken into account. The subrosion model SUBRECN and the implementation of this model in the German computer program EMOS4 are presented. A new computer program, PANTER, is derived from EMOS4. PANTER models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For the uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine the different values of the uncertain parameters. The influence of the uncertainty in the parameters on the dose calculations has been investigated by the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
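
    The two generic ingredients named in the abstract — Latin Hypercube Sampling of the uncertain parameters and rank-correlation sensitivity measures — can be sketched as follows, with a made-up stand-in for the dose calculation rather than the actual SUBRECN/EMOS4/PANTER chain:

    ```python
    import numpy as np
    from scipy.stats import qmc, spearmanr

    # Latin Hypercube sample of three uncertain parameters (names and ranges are
    # made up; the real chain would run the repository codes for every sample).
    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit = sampler.random(n=500)
    low = np.array([0.1, 1.0e-4, 10.0])
    high = np.array([2.0, 1.0e-2, 500.0])
    params = qmc.scale(unit, low, high)            # subrosion rate, dissolution, delay

    # Stand-in for the dose calculation.
    dose = params[:, 0] ** 1.5 * params[:, 1] / np.sqrt(params[:, 2])

    # Spearman rank correlation of each parameter with the dose as a simple
    # global sensitivity measure.
    for name, column in zip(["subrosion rate", "dissolution", "delay"], params.T):
        rho, _ = spearmanr(column, dose)
        print(name, round(rho, 2))
    ```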

  3. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  4. Model for Adjustment of Aggregate Forecasts using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Taracena–Sanz L. F.

    2010-07-01

    Full Text Available This research contributes to the implementation of forecasting models. The proposed model is developed with the aim of fitting the demand projection to the firm's environment, and it is based on three considerations that explain why demand forecasts are in many cases different from reality: 1) one of the problems most difficult to model in forecasting is the uncertainty related to the information available; 2) the methods traditionally used by firms for the projection of demand are mainly based on the past behavior of the market (historical demand); and 3) these methods do not consider in their analysis the factors that are driving the observed behavior. Therefore, the proposed model is based on the implementation of Fuzzy Logic, integrating the main variables that affect the behavior of market demand and which are not considered in the classical statistical methods. The model was applied to a bottler of carbonated beverages, and with the adjustment of the demand projection a more reliable forecast was obtained.

  5. Analysis of Modeling Parameters on Threaded Screws.

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.

  6. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    International Nuclear Information System (INIS)

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings normalized by the core average is 0.053 at the rated power. The k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized as a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP reading at the partial power states
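
    A toy version of the steepest-gradient adjustment is sketched below: two parameters of a made-up axial power-shape model are tuned so that the calculated distribution best fits a "measured" one. The shape function and values are purely illustrative and are not the FLARE nodal equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "Measured" axial power distribution (a stand-in for the TIP readings).
    z = np.linspace(0.0, 1.0, 24)
    s1, s2 = np.sin(np.pi * z), z * np.sin(np.pi * z)
    measured = 1.0 * s1 + 0.15 * s2 + rng.normal(0.0, 0.005, z.size)

    # Toy two-parameter power-shape model, adjusted by steepest descent so that
    # the calculated distribution minimises the mean squared fitting error.
    a, b, step = 0.3, 0.8, 0.5
    for _ in range(2000):
        residual = a * s1 + b * s2 - measured
        a -= step * 2.0 * np.mean(residual * s1)
        b -= step * 2.0 * np.mean(residual * s2)

    print(round(a, 3), round(b, 3))        # close to the underlying 1.0 and 0.15
    ```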

  7. Parameter Estimation of Spacecraft Fuel Slosh Model

    Science.gov (United States)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble) which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding of the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

  8. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    Science.gov (United States)

    Barsai, Gabor

    provides a path to fuse data from lidar, GIS and digital multispectral images and reconstructing the precise 3-D scene model, without human intervention, regardless of the type of data or features in the data. The data are initially registered to each other using GPS/INS initial positional values, then conjugate features are found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds edges of buildings with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge extracted image/map. These edge points are then utilized to orient and locate the datasets, in a correct position with respect to each other.

  9. A Unified Model of Geostrophic Adjustment and Frontogenesis

    Science.gov (United States)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  10. Proximal Alternating Direction Method with Relaxed Proximal Parameters for the Least Squares Covariance Adjustment Problem

    Directory of Open Access Journals (Sweden)

    Minghua Xu

    2014-01-01

    Full Text Available We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics, and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods which can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are greater than zero. In this paper, we conclude that the restriction on the proximal parameters can be relaxed for solving this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally has a better performance than the classical proximal alternating direction method.
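
    The key subproblem in this class of methods, projection onto the positive semidefinite cone, is easy to sketch. The toy example below uses plain alternating projections for a nearest-correlation-matrix problem rather than the proximal alternating direction method of the paper.

    ```python
    import numpy as np

    def project_psd(a):
        """Project a symmetric matrix onto the positive semidefinite cone
        (eigenvalue clipping) - the key subproblem in this class of methods."""
        w, v = np.linalg.eigh((a + a.T) / 2.0)
        return v @ np.diag(np.clip(w, 0.0, None)) @ v.T

    def nearest_correlation(g, n_iter=200):
        """Plain alternating projections between the PSD cone and the unit-diagonal
        set (a simpler scheme than the proximal ADM studied in the paper)."""
        x = g.copy()
        for _ in range(n_iter):
            x = project_psd(x)
            np.fill_diagonal(x, 1.0)
        return x

    g = np.array([[1.0, 0.9, -0.9],
                  [0.9, 1.0, 0.3],
                  [-0.9, 0.3, 1.0]])           # not positive semidefinite
    x = nearest_correlation(g)
    print(np.linalg.eigvalsh(x))               # no significantly negative eigenvalues
    ```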

  11. PERMINTAAN BERAS DI PROVINSI JAMBI (Penerapan Partial Adjustment Model

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    Full Text Available The purpose of this study is to determine the effect of the price of rice, the price of flour, the population, the income of the population, and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand forecast in Jambi Province. This study uses secondary data, including time series data for 22 years from 1988 until 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), the income of the population (PDRB), and the demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and dynamic analysis with a Partial Adjustment Model, where the demand for rice is the dependent variable and the price of rice, the price of flour, population, the income of the population, and last year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in the demand for rice are not significant. The population and the previous year's demand for rice have a positive and significant impact on the demand for rice, while the income of the population has a negative and significant effect on rice demand. The price of rice, the income of the population, and the price of flour are inelastic with respect to rice demand, because rice is not a normal good but a necessity, so there is no substitution of goods (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that the population is one of the factors that affect the demand for rice. It is also expected that the government promotes the consumption of non-rice staple foods to control the increasing demand for rice. Finally, it is suggested that the government develop a diversification of staple foods other than rice.

  12. Model comparisons and genetic and environmental parameter ...

    African Journals Online (AJOL)

    Model comparisons and genetic and environmental parameter estimates of growth and the ... breeding strategies and for accurate breeding value estimation. The objectives ...

  13. The rho-parameter in supersymmetric models

    International Nuclear Information System (INIS)

    Lim, C.S.; Inami, T.; Sakai, N.

    1983-10-01

    The electroweak rho-parameter is examined in a general class of supersymmetric models. Formulae are given for one-loop contributions to Δρ from scalar quarks and leptons, gauge-Higgs fermions and an extra doublet of Higgs scalars. Mass differences between members of isodoublet scalar quarks and leptons are constrained to be less than about 200 GeV. (author)

  14. A lumped parameter model of plasma focus

    International Nuclear Information System (INIS)

    Gonzalez, Jose H.; Florido, Pablo C.; Bruzzone, H.; Clausse, Alejandro

    1999-01-01

    A lumped parameter model to estimate the neutron emission of a plasma focus (PF) device is developed. The dynamics of the current sheet are calculated using a snowplow model, and the neutron production is computed with the thermal fusion cross section for a deuterium filling gas. The results were compared, as a function of the filling pressure, with experimental measurements of a 3.68 kJ Mather-type PF. (author)

  15. One parameter model potential for noble metals

    International Nuclear Information System (INIS)

    Idrees, M.; Khwaja, F.A.; Razmi, M.S.K.

    1981-08-01

    A phenomenological one parameter model potential which includes s-d hybridization and core-core exchange contributions is proposed for noble metals. A number of interesting properties like liquid metal resistivities, band gaps, thermoelectric powers and ion-ion interaction potentials are calculated for Cu, Ag and Au. The results obtained are in better agreement with experiment than the ones predicted by the other model potentials in the literature. (author)

  16. Electromagnetic structure of pion in the framework of adjusted VMD model with elastic cut

    International Nuclear Information System (INIS)

    Dubnicka, S.; Furdik, I.; Meshcheryakov, V.A.

    1987-01-01

    The vector dominance model (VMD) parametrization of pion form factor is transformed into the pion c.m. momentum variable. Then the corresponding VMD poles are shifted by means of the nonzero widths of vector mesons from the real axis into the complex region of the second sheet of Riemann surface generated by the square-root two-pion-threshold branchpoint. A realistic description of all existing data is achieved in the framework of this adjusted VMD model and the presence of ρ'(1250) and ρ''(1600) mesons in e+e− → π+π− is confirmed by determination of their parameters directly from the fit of data

  17. Risk adjusted receding horizon control of constrained linear parameter varying systems

    NARCIS (Netherlands)

    Sznaier, M.; Lagoa, C.; Stoorvogel, Antonie Arij; Li, X.

    2005-01-01

    In the past few years, control of Linear Parameter Varying Systems (LPV) has been the object of considerable attention, as a way of formalizing the intuitively appealing idea of gain scheduling control for nonlinear systems. However, currently available LPV techniques are both computationally

  18. Calibration of discrete element model parameters: soybeans

    Science.gov (United States)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to get an insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach with a standard box-type apparatus was used. Further, qualitative and quantitative findings such as the particle profile, the height of kernels retained against the acrylic wall, and the angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes the following (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}), and (b) interaction parameters such as particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.

  19. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    Science.gov (United States)

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  20. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
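
    A compact genetic-algorithm sketch of the optimisation loop is given below; the "misfit" is a made-up stand-in for the weighted butterfly-diagram comparison, and the parameter names, bounds and GA settings are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in fitness: the "observed butterfly diagram" is generated by a hidden
    # parameter vector and the misfit is a simple weighted squared difference.
    hidden = np.array([11.0, 450.0, 0.025])          # flow speed, diffusivity, decay
    def misfit(p):
        return float(np.sum(((p - hidden) / hidden) ** 2))

    bounds = np.array([[5.0, 20.0], [100.0, 800.0], [0.0, 0.1]])
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))

    # Minimal genetic algorithm: elitism, tournament selection, blend crossover,
    # Gaussian mutation scaled to the parameter ranges.
    for _ in range(100):
        scores = np.array([misfit(p) for p in pop])
        new_pop = [pop[np.argmin(scores)]]
        while len(new_pop) < len(pop):
            idx = rng.choice(len(pop), 3)
            parent_a = pop[idx[np.argmin(scores[idx])]]
            idx = rng.choice(len(pop), 3)
            parent_b = pop[idx[np.argmin(scores[idx])]]
            child = parent_a + rng.uniform(0.0, 1.0, 3) * (parent_b - parent_a)
            child += rng.normal(0.0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])
            new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.array(new_pop)

    best = pop[np.argmin([misfit(p) for p in pop])]
    print(best)                                       # close to the hidden vector
    ```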

  1. Adjusting inkjet printhead parameters to deposit drugs into micro-sized reservoirs

    Directory of Open Access Journals (Sweden)

    Mau Robert

    2016-09-01

    Full Text Available Drug delivery systems (DDS) ensure that therapeutically effective drug concentrations are delivered locally to the target site. For that reason, it is common to coat implants with a degradable polymer which contains drugs. However, the use of polymers as a drug carrier has been associated with adverse side effects. For that reason, several technologies have been developed to design polymer-free DDS. In the literature it has been shown that micro-sized reservoirs can be applied as drug reservoirs. Inkjet techniques are capable of depositing drugs into these reservoirs. In this study, two different geometries of micro-sized reservoirs have been laden with a drug (ASA) using a drop-on-demand inkjet printhead. Correlations between the characteristics of the drug solution, the operating parameters of the printhead and the geometric parameters of the reservoir are shown. It is indicated that the wettability of the surface plays a key role in drug deposition into micro-sized reservoirs.

  2. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  3. Rice Demand in Jambi Province (An Application of the Partial Adjustment Model)

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

Full Text Available The purpose of this study is to determine the effect of the price of rice, the price of wheat flour, population, population income, and the previous year's rice demand on rice demand, to estimate rice demand elasticities, and to forecast rice demand in Jambi Province. This study uses secondary time series data for 22 years, from 1988 to 2009. The variables are rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB), and rice demand in the previous year (Qdt-1). The methods are multiple regression and a dynamic Partial Adjustment Model, with rice demand as the dependent variable and the price of rice, the price of flour, population, population income, and the previous year's rice demand as the independent variables. The Partial Adjustment Model results show that changes in the prices of rice and flour have no significant effect on changes in rice demand. Population and the previous year's rice demand have a positive and significant impact on rice demand, while population income has a negative and significant effect. Rice demand is inelastic with respect to the price of rice, population income, and the price of flour, because rice is not a normal good but a necessity, so there is no substitution of goods (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting rice demand. It is also expected that the government begin promoting non-rice food consumption to restrain the growth of rice demand. Finally, it is suggested that the government develop diversification of staple foods other than rice. Keywords: Demand, Rice, Income, Population
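
    For reference, the partial adjustment mechanism referred to above is conventionally written as below. This is the generic Nerlove-type textbook form, not necessarily the exact specification estimated in the paper, with Q*_t the desired demand and δ the adjustment coefficient:

```latex
% Generic partial adjustment model (Nerlove-type)
Q_t - Q_{t-1} = \delta\,(Q_t^{*} - Q_{t-1}), \qquad 0 < \delta \le 1
% which yields the estimable short-run equation
Q_t = \delta\,Q_t^{*} + (1-\delta)\,Q_{t-1}
```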

  4. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    Science.gov (United States)

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  5. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    Directory of Open Access Journals (Sweden)

    Elena Papale

Full Text Available An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  6. Dolphins Adjust Species-Specific Frequency Parameters to Compensate for Increasing Background Noise

    Science.gov (United States)

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles’ frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise. PMID:25853825

  7. To tune or not to tune : recommending when to adjust SVM hyper-parameters via Meta-learning

    NARCIS (Netherlands)

    Gomes Mantovani, R.; Rossi, A.L.D.; Vanschoren, J.; Bischl, B.; Carvalho, A.C.P.L.F.

    2015-01-01

    Many classification algorithms, such as Neural Networks and Support Vector Machines, have a range of hyper-parameters that may strongly affect the predictive performance of the models induced by them. Hence, it is recommended to define the values of these hyper-parameters using optimization

  8. Adjusting the Adjusted X²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  9. Dosimetry-adjusted reactor physics parameters for pressure vessel neutron exposure assessment

    International Nuclear Information System (INIS)

    McElroy, W.N.; Kellogg, L.S.

    1988-01-01

The ASTM E706 master matrix standard describes a series of 20 American Society for Testing and Materials (ASTM) standard practices, guides, and methods for use in the prediction of neutron-induced changes in light water reactor (LWR) pressure vessel (PV) and support structure steels throughout a PV's service life. Some of these are existing ASTM standards, some are ASTM standards that have been modified, and some are new ASTM standards. These standards are periodically revised to ensure their applicability during the 40-yr (32 effective full-power years) design license period for a nuclear power plant. They are now under review by two new ASTM plant life extension task groups: E10.05.11 on physics dosimetry and E10.02.11 on metallurgy. A brief review of the current application of these standards and a discussion of the status of work to verify the accuracy of derived physics-dosimetry parameter values are presented in this paper.

  10. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  11. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali from January 2010 to December 2015 was used to model Korean tourist arrivals (KOR) as the dependent variable. The predictors are the exchange rate of the Won to IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP and its parameters were approximated using the Kalman Filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
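
    A time varying parameter regression of this kind is commonly cast in state-space form so that the Kalman filter can track the coefficients. A generic formulation, not necessarily the exact one used for the KOR series, is:

```latex
% Observation equation: arrivals explained by time-varying coefficients
y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta}_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0,\sigma^2)
% State equation: coefficients evolve as a random walk, updated by the Kalman filter
\boldsymbol{\beta}_t = \boldsymbol{\beta}_{t-1} + \boldsymbol{\eta}_t, \qquad \boldsymbol{\eta}_t \sim N(\mathbf{0},\mathbf{Q})
```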

  12. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
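
    The Green's Functions step described above amounts to a weighted linear least-squares fit of a small number of sensitivity-experiment responses to the model-data misfit. A minimal numpy sketch of that idea, with entirely hypothetical arrays standing in for the pCO2 and air-sea flux constraints, is:

```python
# Minimal sketch of a Green's Functions adjustment: find the linear combination
# of sensitivity-experiment responses that best reduces the model-data misfit.
# All arrays are hypothetical stand-ins for the observational constraints.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_ctrl = 500, 6                 # observations and control parameters (six, as in the abstract)

y_obs = rng.normal(size=n_obs)         # observational constraints
y_base = rng.normal(size=n_obs)        # baseline simulation sampled at the observations
G = rng.normal(size=(n_obs, n_ctrl))   # columns: response of each sensitivity experiment
w = np.ones(n_obs)                     # data uncertainty weights (uniform here)

# Solve  min_eta || W^(1/2) (y_obs - y_base - G eta) ||^2
A = np.sqrt(w)[:, None] * G
b = np.sqrt(w) * (y_obs - y_base)
eta, *_ = np.linalg.lstsq(A, b, rcond=None)

cost_before = np.sum(w * (y_obs - y_base) ** 2)
cost_after = np.sum(w * (y_obs - y_base - G @ eta) ** 2)
print("control adjustments:", eta)
print("cost reduction: %.1f%%" % (100 * (1 - cost_after / cost_before)))
```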

  13. Luminescence model with quantum impact parameter for low energies

    International Nuclear Information System (INIS)

    Cruz G, H.S.; Michaelian, K.; Galindo U, S.; Martinez D, A.; Belmont M, E.

    2000-01-01

The analytical model of induced light production in scintillator materials by energetic ions proposed by Michaelian and Menchaca (M-M) fits the luminescence data very well over a wide energy interval of the incident ions (10-100 MeV). However, at low energies, that is, below 10 MeV, the experimental data deviate from the predictions of the M-M model, suggesting that certain physical effects which become important at low energies were not considered. We have slightly modified the M-M model using the basic fact that quantum mechanics gives a different limit for the impact parameter than the classical approximation. (Author)

  14. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    Science.gov (United States)

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  15. The lumped parameter model for fuel pins

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W S [Ontario Hydro, Toronto, ON (Canada)

    1996-12-31

    The use of a lumped fuel-pin model in a thermal-hydraulic code is advantageous because of computational simplicity and efficiency. The model uses an averaging approach over the fuel cross section and makes some simplifying assumptions to describe the transient equations for the averaged fuel, fuel centerline and sheath temperatures. It is shown that by introducing a factor in the effective fuel conductivity, the analytical solution of the mean fuel temperature can be modified to simulate the effects of the flux depression in the heat generation rate and the variation in fuel thermal conductivity. The simplified analytical method used in the transient equation is presented. The accuracy of the lumped parameter model has been compared with the results from the finite difference method. (author). 4 refs., 2 tabs., 4 figs.
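
    A lumped fuel-pin formulation of the kind outlined above reduces the radial conduction problem to an energy balance on the cross-section-averaged fuel temperature. The form below is a generic textbook statement under that assumption, not the specific equations of the cited work:

```latex
% Generic lumped-parameter energy balance for the averaged fuel temperature
m_f c_f \frac{d\bar{T}_f}{dt} = q(t) - \frac{\bar{T}_f - T_s}{R_{fs}}
% R_{fs}: effective fuel-to-sheath thermal resistance.  A correction factor in the
% effective fuel conductivity can absorb flux depression and the temperature
% dependence of conductivity, as described in the abstract.
```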

  16. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  17. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    Directory of Open Access Journals (Sweden)

    M. Casaioli

    2006-01-01

Full Text Available In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events which occurred over the island of Cyprus were performed by means of numerical atmospheric models. One of the aims of the project was indeed the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation BOlogna Limited Area Model (BOLAM) forecast was compared with the available observations reconstructed from ground-based radar data and estimated by rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data could be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to the two observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern more than the non-shifted forecast one. However, some questions, especially regarding the effect of other range-adjustment techniques, remain open and need to be addressed in future works.

  18. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on the MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to the multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven with the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra and to perform energy deposition and dose calculations. Some of the calculation results are presented in the paper.

  19. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  20. Capital adjustment cost and bias in income based dynamic panel models with fixed effects

    OpenAIRE

    Yoseph Yilma Getachew; Keshab Bhattarai; Parantap Basu

    2012-01-01

The fixed effects (FE) estimator of "conditional convergence" in income-based dynamic panel models could be biased downward when capital adjustment cost is present. Such a capital adjustment cost means a rising marginal cost of investment which could slow down the convergence. The standard FE regression fails to take this capital adjustment cost into account and thus could overestimate the rate of convergence. Using a Ramsey model with long-run adjustment cost of capital, we characteriz...

  1. Moose models with vanishing S parameter

    International Nuclear Information System (INIS)

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2) L and U(1) Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall Sundrum metric

  2. Adjustment costs in a two-capital growth model

    Czech Academy of Sciences Publication Activity Database

    Duczynski, Petr

    2002-01-01

    Roč. 26, č. 5 (2002), s. 837-850 ISSN 0165-1889 R&D Projects: GA AV ČR KSK9058117 Institutional research plan: CEZ:AV0Z7085904 Keywords : adjustment costs * capital mobility * convergence * human capital Subject RIV: AH - Economics Impact factor: 0.738, year: 2002

  3. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

In ATM networks, a user should negotiate at connection set-up a traffic contract which includes traffic characteristics and requested QoS. The traffic characteristics currently considered are the Peak Cell Rate, the Sustainable Cell Rate, the Intrinsic Burst Tolerance and the Cell Delay Variation (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called ''Worst Case Traffic'' that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User... ... essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper...
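
    Conformance to such a traffic contract is tested with the Generic Cell Rate Algorithm GCRA(I, L). The virtual-scheduling form below is a standard textbook statement of it, shown here only to make the roles of the two parameters (increment I, tolerance L) concrete; it is not code from the cited paper.

```python
class GCRA:
    """Generic Cell Rate Algorithm, virtual scheduling form.

    I -- increment (e.g. 1/PCR or 1/SCR, in time units per cell)
    L -- limit (CDV tolerance or burst tolerance)
    """

    def __init__(self, increment, limit):
        self.I = increment
        self.L = limit
        self.tat = None              # theoretical arrival time

    def conforming(self, t_arrival):
        if self.tat is None:         # first cell is always conforming
            self.tat = t_arrival + self.I
            return True
        if t_arrival < self.tat - self.L:
            return False             # cell arrived too early: non-conforming, TAT unchanged
        self.tat = max(t_arrival, self.tat) + self.I
        return True

# Example: peak rate of 1 cell per 10 time units, tolerance of 3
gcra = GCRA(increment=10, limit=3)
print([gcra.conforming(t) for t in (0, 8, 16, 30, 31)])
```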

  4. Microwave, infrared and Raman spectra, adjusted r₀ structural parameters, conformational stability, and vibrational assignment of cyclopropylfluorosilane

    Energy Technology Data Exchange (ETDEWEB)

    Panikar, Savitha S. [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); Guirgis, Gamil A.; Eddens, Matthew T.; Dukes, Horace W. [Department of Chemistry and Biochemistry, College of Charleston, Charleston, SC 29424 (United States); Conrad, Andrew R.; Tubergen, Michael J. [Department of Chemistry, Kent State University, Kent, OH 44242 (United States); Gounev, Todor K. [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States); Durig, James R., E-mail: durigj@umkc.edu [Department of Chemistry, University of Missouri-Kansas City, Kansas City, MO 64110 (United States)

    2013-03-29

Highlights: ► The most stable gauche conformer has been identified from microwave spectra. ► Enthalpy difference has been determined between the two forms. ► Adjusted r₀ structures were obtained for the gauche form. ► Ab initio calculations were performed for the two conformers. - Abstract: FT-microwave and infrared spectra of the gas and Raman spectra of the liquid for cyclopropylfluorosilane, c-C₃H₅SiH₂F, have been recorded. 51 transitions for the ²⁸Si, ²⁹Si, and ³⁰Si isotopomers have been assigned for the gauche conformer. The enthalpy difference in xenon solution between the more stable gauche and the less stable cis form, determined by variable temperature infrared spectra, is 109 ± 9 cm⁻¹. From the microwave rotational constants for the three isotopomers (²⁸Si, ²⁹Si, ³⁰Si) combined with structural parameters predicted from MP2(full)/6-311+G(d, p) calculations, adjusted r₀ structural parameters were obtained for the gauche conformer. The heavy atom distances (Å): Si–C₂ = 1.836(3); C₂–C₄ = 1.525(3); C₂–C₅ = 1.519(3); C₄–C₅ = 1.494(3); Si–F = 1.594(3) and angles (°): ∠CSiF = 111.2(5); ∠SiC₂C₄ = 117.5(5); ∠SiC₂C₅ = 119.2(5). To support the vibrational assignments, MP2(full)/6-31G(d) calculations were carried out. Results are discussed and compared to the corresponding properties of some similar molecules.

  5. Player Modeling Using HOSVD towards Dynamic Difficulty Adjustment in Videogames

    OpenAIRE

    Anagnostou , Kostas; Maragoudakis , Manolis

    2012-01-01

    Part 3: Second International Workshop on Computational Intelligence in Software Engineering (CISE 2012); International audience; In this work, we propose and evaluate a Higher Order Singular Value Decomposition (HOSVD) of a tensor as a means to classify player behavior and adjust game difficulty dynamically. Applying this method to player data collected during a plethora of game sessions resulted in a reduction of the dimensionality of the classification problem and a robust classification of...

  6. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    Science.gov (United States)

    Sjogren, W. L.

    1994-01-01

The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least squares fitted to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required as input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  7. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    Science.gov (United States)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D visco elastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural work flow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time consuming steps associated with the authoring of efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns, in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in the uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near field area where field observations are abundant, namely, Disko Bay in West Greenland with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local

  8. Delayed heart rate recovery after exercise as a risk factor of incident type 2 diabetes mellitus after adjusting for glycometabolic parameters in men.

    Science.gov (United States)

    Yu, Tae Yang; Jee, Jae Hwan; Bae, Ji Cheol; Hong, Won-Jung; Jin, Sang-Man; Kim, Jae Hyeon; Lee, Moon-Kyu

    2016-10-15

    Some studies have reported that delayed heart rate recovery (HRR) after exercise is associated with incident type 2 diabetes mellitus (T2DM). This study aimed to investigate the longitudinal association of delayed HRR following a graded exercise treadmill test (GTX) with the development of T2DM including glucose-associated parameters as an adjusting factor in healthy Korean men. Analyses including fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c as confounding factors and known confounders were performed. HRR was calculated as peak heart rate minus heart rate after a 1-min rest (HRR 1). Cox proportional hazards model was used to quantify the independent association between HRR and incident T2DM. During 9082 person-years of follow-up between 2006 and 2012, there were 180 (10.1%) incident cases of T2DM. After adjustment for age, BMI, systolic BP, diastolic BP, smoking status, peak heart rate, peak oxygen uptake, TG, LDL-C, HDL-C, fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, the hazard ratios (HRs) [95% confidence interval (CI)] of incident T2DM comparing the second and third tertiles to the first tertile of HRR 1 were 0.867 (0.609-1.235) and 0.624 (0.426-0.915), respectively (p for trend=0.017). As a continuous variable, in the fully-adjusted model, the HR (95% CI) of incident T2DM associated with each 1 beat increase in HRR 1 was 0.980 (0.960-1.000) (p=0.048). This study demonstrated that delayed HRR after exercise predicts incident T2DM in men, even after adjusting for fasting glucose, HOMA-IR, HOMA-β, and HbA1c. However, only HRR 1 had clinical significance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
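
    A minimal sketch of the kind of adjusted survival model used above, assuming the Python lifelines package and entirely hypothetical column and file names, might look like the following; it is an illustration of a Cox proportional hazards fit with HRR tertiles and glycometabolic adjusters, not the authors' analysis code.

```python
# Cox proportional hazards fit with HRR-1 tertiles plus glycometabolic adjusters,
# assuming the `lifelines` package; data file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")   # hypothetical file, one row per participant

# Tertiles of heart rate recovery at 1 min, coded as indicator variables
df["hrr_tertile"] = pd.qcut(df["hrr1"], 3, labels=[1, 2, 3])
df = pd.get_dummies(df, columns=["hrr_tertile"], drop_first=True)

covariates = ["age", "bmi", "systolic_bp", "fasting_glucose", "homa_ir", "hba1c",
              "hrr_tertile_2", "hrr_tertile_3"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_years", "incident_t2dm"]],
        duration_col="followup_years", event_col="incident_t2dm")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs
```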

  9. Adjusting kinematics and kinetics in a feedback-controlled toe walking model

    Directory of Open Access Journals (Sweden)

    Olenšek Andrej

    2012-08-01

    Full Text Available Abstract Background In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. The ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutical intervention - inhibitory casting. Results By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. We further acknowledge that they to some extent follow similar improvement tendencies as those which one can

  10. Structural Adjustment Policy Experiments: The Use of Philippine CGE Models

    OpenAIRE

    Cororaton, Caesar B.

    1994-01-01

This paper reviews the general structure of the following computable general equilibrium (CGE) models: the APEX model, Habito’s second version of the PhilCGE model, Cororaton’s CGE model and Bautista’s first CGE model. These models are chosen as they represent the range of recently constructed CGE models of the Philippine economy. They also represent two schools of thought in CGE modeling: the well-defined neoclassical, Walrasian, general equilibrium school where the market-clearing variable...

  11. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  12. TruMicro Series 2000 sub-400 fs class industrial fiber lasers: adjustment of laser parameters to process requirements

    Science.gov (United States)

    Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2017-02-01

    The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment.1 Within the next years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: These are non-contact wear-free processing, higher process accuracy or an increase of processing speed and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real time process-inherent adjustments of the parameters with industry-ready reliability. This industry-ready reliability is ensured by a long experience in designing and building ultrashort-pulse lasers in combination with rigorous optimization of the mechanical construction, optical components and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high repetition rate mode-locked fiber oscillator also enabling flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization

  13. Modeling of an Adjustable Beam Solid State Light

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax OpticStudio ®. The...

  14. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available

  15. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  16. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  17. The impact of highway base-saturation flow rate adjustment on Kuwait's transport and environmental parameters estimation.

    Science.gov (United States)

    AlRukaibi, Fahad; AlKheder, Sharaf; Al-Rukaibi, Duaij; Al-Burait, Abdul-Aziz

    2018-03-23

Traditional transportation systems' management and operation mainly focused on improving traffic mobility and safety without imposing any environmental concerns. Transportation and environmental issues are interrelated and affected by the same parameters, especially at signalized intersections. Additionally, traffic congestion at signalized intersections is a major contributor to the environmental problem through vehicle emissions, fuel consumption, and delay. Therefore, signalized intersections' design and operation is an important parameter to minimize the impact on the environment. The design and operation of signalized intersections are highly dependent on the base saturation flow rate (BSFR). The Highway Capacity Manual (HCM) uses a base-saturation flow rate of 1900 passenger car/h/lane for areas with a population greater than or equal to 250,000 and a value of 1750 passenger car/h/lane for less populated areas. The base-saturation flow rate value in the HCM is derived from field data collected in developed countries. The value adopted in Kuwait is 1800 passenger car/h/lane, which is the value used in this analysis as a basis for comparison. Due to the difference in behavior between drivers in developed countries and drivers in Kuwait, an adjustment was made to the base-saturation flow rate to represent Kuwait's traffic and environmental conditions. The reduction in fuel consumption and vehicle emissions after modifying the base-saturation flow rate (BSFR increased by 12.45%) was about 34% on average. Direct field measurements of the saturation flow rate were used, while the air quality mobile lab was used to calculate emission rates. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. A 16.3 pJ/pulse low-complexity and energy-efficient transmitter with adjustable pulse parameters

    International Nuclear Information System (INIS)

    Jiang Jun; Zhao Yi; Shao Ke; Chen Hu; Xia Lingli; Hong Zhiliang

    2011-01-01

This paper presents a novel, fully integrated transmitter for 3-5 GHz pulsed UWB. The BPSK modulation transmitter has been implemented in SMIC CMOS 0.13 μm technology with a 1.2-V supply voltage and a die size of 0.8 × 0.95 mm². This transmitter is based on the impulse response filter method, which uses a tunable R paralleled with an LC frequency selection network to realize continuously adjustable pulse parameters, including bandwidth, width and amplitude. Due to the extremely low duty cycle of the pulsed UWB, a proposed output buffer is employed to save power consumption significantly. Finally, measurement results show that the transmitter consumes only 16.3 pJ/pulse to achieve a pulse repetition rate of 100 Mb/s. Generated pulses strictly comply with the FCC spectral mask. The pulse width is continuously variable from 900 to 1.5 ns, and amplitudes from a minimum of 178 mVpp to a maximum of 432 mVpp can be achieved. (semiconductor integrated circuits)

  19. A Robust and Fast Method to Compute Shallow States without Adjustable Parameters: Simulations for a Silicon-Based Qubit

    Science.gov (United States)

    Debernardi, Alberto; Fanciulli, Marco

    Within the framework of the envelope function approximation we have computed - without adjustable parameters and with a reduced computational effort due to analytical expression of relevant Hamiltonian terms - the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on the applied external electric field along the [001] direction. Our results reproduce correctly the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement of a shallow state of a P atom at the center of a spherical Si-nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential are included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.

  20. Comprehensive Study of Z-Cut Highly Integrated LiNbO3 Optical Modulator with Adjustable Chirp Parameters

    Science.gov (United States)

    Palodiya, Vikram; Raghuwanshi, Sanjeev Kumar

    2017-12-01

In this paper, domain inversion is used in a simple fashion to improve the performance of a Z-cut highly integrated LiNbO3 optical modulator (LNOM). For a Z-cut modulator having ≤ 3 V switching voltage and a bandwidth of 15 GHz, in which a traveling-wave electrode of length L_{m} imposes the modulating voltage, the product of V_π and L_{m} is fixed for a given electro-optic material (EOM). An investigation of achieving a low V_π through the magnitude of the electro-optic coefficient (EOC) for a wide variety of EOMs has been reported. The Sellmeier equation (SE) for the extraordinary index of congruent LiNbO3 is derived. The predictions related to phase matching are accurate between room temperature and 250 °C and for wavelengths ranging from 0.4 to 5 μm. The SE predicts more accurate refractive indices (RI) at long wavelengths. The different overlaps between the waveguides for the Z-cut structure are shown to yield a chirp parameter that can be adjusted from 0 to 0.7. Theoretical results are verified by simulated results.
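
    For reference, a Sellmeier-type dispersion relation of the kind referred to above takes the standard form below. This is the generic multi-pole form; the paper's fitted coefficients for the extraordinary index of congruent LiNbO3, including their temperature dependence, are not reproduced here.

```latex
% Generic Sellmeier dispersion relation for the refractive index n(\lambda)
n^2(\lambda) = 1 + \sum_{i} \frac{B_i\,\lambda^2}{\lambda^2 - C_i}
% For congruent LiNbO3 the extraordinary index n_e also carries a temperature
% dependence, B_i = B_i(T), fitted in the paper between room temperature and 250 °C.
```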

  1. Husbands' perceptions of their wives' breast cancer coping efficacy: testing congruence models of adjustment.

    Science.gov (United States)

    Merluzzi, Thomas V; Martinez Sanchez, MaryAnn

    2018-01-01

Recent reviews have reinforced the notion that having a supportive spouse can help with the process of coping with and adjusting to cancer. Congruence between spouses' perspectives has been proposed as one mechanism in that process, yet alternative models of congruence have not been examined closely. This study assessed alternative models of congruence in perceptions of coping and their mediating effects on adjustment to breast cancer. Seventy-two women in treatment for breast cancer and their husbands completed measures of marital adjustment, self-efficacy for coping, and adjustment to cancer. Karnofsky Performance Status was obtained from medical records. Wives completed a measure of self-efficacy for coping (wives' ratings of self-efficacy for coping [WSEC]) and husbands completed a measure of self-efficacy for coping (husbands' ratings of wives' self-efficacy for coping [HSEC]) based on their perceptions of their wives' coping efficacy. Interestingly, the correlation between WSEC and HSEC was only 0.207; thus, they are relatively independent perspectives. The following three models were tested to determine the nature of the relationship between WSEC and HSEC: discrepancy model (WSEC - HSEC), additive model (WSEC + HSEC), and multiplicative model (WSEC × HSEC). The discrepancy model was not related to wives' adjustment; however, the additive (B = 0.205, P < 0.001) and multiplicative (B = 0.001, P < 0.001) models were significantly related to wives' adjustment. Also, the additive model mediated the relationship between performance status and adjustment. Husbands' perception of their wives' coping efficacy contributed marginally to their wives' adjustment, and the combination of WSEC and HSEC mediated the relationship between functional status and wives' adjustment, thus positively impacting wives' adjustment to cancer. Future research is needed to determine the quality of the differences between HSEC and WSEC in order to develop interventions to optimize the

  2. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.

  3. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, when evaluating the economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared to the same discount rate model, the nuclear fuel cycle cost of a different discount rate model is reduced because the generation quantity as denominator in Equation has been discounted. Namely, if the discount rate reduces in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.

  4. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, when evaluating the economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared to the same discount rate model, the nuclear fuel cycle cost of a different discount rate model is reduced because the generation quantity as denominator in Equation has been discounted. Namely, if the discount rate reduces in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole

  5. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
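
    One widely used member of the family of photosynthesis-irradiance functions discussed above is the exponential saturation form below; it is shown only to make the two parameters named in the abstract concrete, and is not necessarily the exact suite used at Station Aloha.

```latex
% Generic photosynthesis-irradiance function (exponential saturation form)
P^{B}(I) = P^{B}_{m}\left(1 - \exp\!\left(-\frac{\alpha^{B} I}{P^{B}_{m}}\right)\right)
% alpha^B : initial slope;  P^B_m : assimilation number;  I : irradiance
```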

  6. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    Science.gov (United States)

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  7. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete case by means of the state diagram. The optimization problem at the economic level is also employed; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
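
    A short numerical sketch of the textbook Solow dynamics that underlie such an analysis (standard form with assumed parameter values, not the adjusted model of the article), showing convergence of capital per effective worker to its analytical steady state:

```python
# Illustrative sketch of standard Solow dynamics: capital per effective worker
# converges to the steady state k* = (s/(n+g+delta))**(1/(1-a)).
s, a, n, g, delta = 0.25, 0.33, 0.01, 0.02, 0.05   # assumed parameter values
k = 1.0
for _ in range(500):
    k = k + s * k**a - (n + g + delta) * k          # discrete-time update
k_star = (s / (n + g + delta)) ** (1.0 / (1.0 - a))
print(f"simulated k = {k:.4f}, analytical steady state k* = {k_star:.4f}")
```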

  8. The influence of pH adjustment on kinetics parameters in tapioca wastewater treatment using aerobic sequencing batch reactor system

    Science.gov (United States)

    Mulyani, Happy; Budianto, Gregorius Prima Indra; Margono, Kaavessina, Mujtahid

    2018-02-01

    The present investigation deals with an aerobic sequencing batch reactor system for tapioca wastewater treatment under varying influent pH conditions. The project was carried out to evaluate the effect of pH on the kinetic parameters of the system. This was done by operating the aerobic sequencing batch reactor system for 8 hours under several tapioca wastewater conditions (pH 4.91, pH 7, pH 8). The Chemical Oxygen Demand (COD) and Mixed Liquor Volatile Suspended Solids (MLVSS) of the reactor effluent at steady state were determined at two-hour intervals to generate data for the substrate-inhibition kinetic parameters. Values of the kinetic constants were determined using the Monod and Andrews models. No inhibition constant (Ki) was detected in any of the process variations of the aerobic sequencing batch reactor system for tapioca wastewater treatment in this study. Furthermore, pH 8 was selected as the preferred operating condition within the pH range investigated, owing to the kinetic parameter values it achieved, namely µmax = 0.010457/hour and Ks = 255.0664 mg/L COD.
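
    For illustration, a sketch of the Monod and Andrews (substrate-inhibition) rate expressions with the reported pH 8 constants; the COD levels and the Ki value used for the Andrews curve are hypothetical:

```python
# Sketch (assumed forms): Monod and Andrews (Haldane) specific growth rates,
# using the kinetic constants reported for the pH 8 run; the Andrews model
# adds a substrate-inhibition constant Ki, which was not detected in this study.
import numpy as np

MU_MAX = 0.010457      # 1/hour (reported value)
KS = 255.0664          # mg/L COD (reported value)

def monod(s):
    return MU_MAX * s / (KS + s)

def andrews(s, ki):
    return MU_MAX * s / (KS + s + s**2 / ki)

s = np.array([50.0, 250.0, 1000.0])   # hypothetical COD levels, mg/L
print(monod(s))
print(andrews(s, ki=2000.0))          # hypothetical Ki for illustration only
```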

  9. Study on Gas Field Optimization Distribution with Parameters Adjustment of the Air Duct Outlet for Mechanized Heading Face in Coal Mine

    Science.gov (United States)

    Gong, Xiao-Yan; Zhang, Xin-Yi; Wu, Yue; Xia, Zhi-Xin; Li, Ying

    2017-12-01

    At present, as drilling dimensions in coal mines increase, with cross-sections expanding and heading distances growing longer, gas accumulation at the mechanized heading face is becoming severe. In this paper, optimization of the gas distribution was investigated by adjusting the parameters of the air duct outlet, including the angle, the caliber, and the fore-and-aft distance of the air duct outlet. The mechanized heading face of the Ningtiaota coal mine was taken as the research object: the problems of the original gas field were simulated and analyzed, the reasonable parameter ranges of the air duct outlet were determined according to the allowable range of wind speed and the gas-dilution effect, and the adjustment range of each parameter of the air duct outlet was preliminarily determined. Based on this, the gas field distribution under different parameter adjustments of the air duct outlet was simulated. The specific parameter values for different distances between the air duct outlet and the mechanized heading face were obtained, and a new method of optimizing the gas distribution by adjusting the parameters of the air duct outlet was provided.

  10. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
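
    As a toy illustration of qualitative screening of this kind, a minimal one-at-a-time elementary-effects sketch in plain NumPy (the model function and parameter count are stand-ins, not the Common Land Model):

```python
# Minimal Morris-style screening sketch: parameters are ranked by the mean
# absolute elementary effect over a number of random one-at-a-time perturbations.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):                      # stand-in for an expensive LSM run
    return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[3]

n_params, n_traj, delta = 5, 50, 0.1
effects = np.zeros((n_traj, n_params))
for r in range(n_traj):
    x = rng.uniform(0.0, 1.0, n_params)
    y0 = toy_model(x)
    for i in range(n_params):
        x_pert = x.copy()
        x_pert[i] += delta
        effects[r, i] = (toy_model(x_pert) - y0) / delta
mu_star = np.abs(effects).mean(axis=0)   # Morris mu*: screening measure
print("importance ranking (most to least):", np.argsort(mu_star)[::-1])
```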

  11. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
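
    A hedged sketch of the described adjustment logic; the function name, gain and thresholds are illustrative assumptions, not taken from the patent text:

```python
# Illustrative sketch: adjust a map cell only when the sensitivity of the
# performance variable to that control parameter exceeds a threshold.
def update_maps(map1_val, map2_val, sens1, sens2, error, gain=0.1,
                thresh1=0.5, thresh2=0.5):
    """Return adjusted look-up-table values for the current operating cell."""
    if abs(sens1) > thresh1:
        map1_val -= gain * error / sens1   # nudge parameter 1 toward target
    if abs(sens2) > thresh2:
        map2_val -= gain * error / sens2   # nudge parameter 2 toward target
    return map1_val, map2_val

# Example: performance variable is 0.8 above target; only parameter 1 is sensitive.
print(update_maps(10.0, 5.0, sens1=2.0, sens2=0.1, error=0.8))
```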

  13. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  14. Study on Parameters Modeling of Wind Turbines Using SCADA Data

    Directory of Open Access Journals (Sweden)

    Yonglong YAN

    2014-08-01

    Full Text Available Taking advantage of the massive monitoring data now available from the Supervisory Control and Data Acquisition (SCADA) systems of wind farms, building data models of the state parameters of wind turbines (WTs) is of great significance for anomaly detection, early warning and fault diagnosis. The operating conditions and the relationships between the state parameters of wind turbines are complex, and it is difficult to establish accurate state-parameter models; a modeling method for the state parameters of wind turbines that takes parameter selection into account is therefore proposed. Firstly, by analyzing the characteristics of the SCADA data, a reasonable range of data and a set of monitoring parameters are chosen. Secondly, a neural network algorithm is adopted, and a method for selecting the input parameters of the model is presented. Generator bearing temperature and cooling air temperature are taken as the target parameters, and the two models are built and their input parameters selected, respectively. Finally, the parameter selection method of this paper and a method using genetic algorithm-partial least squares (GA-PLS) are analyzed comparatively, and the results show that the proposed methods are correct and effective. Furthermore, the modeling of the two parameters illustrates that the method presented here can be applied to other state parameters of wind turbines.
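
    An illustrative sketch of a small neural-network state-parameter model of this kind (scikit-learn, synthetic SCADA-like data; the chosen input channels are assumptions, not the paper's selection):

```python
# Illustrative sketch (not the paper's code): a small neural-network model of
# one wind-turbine state parameter (generator bearing temperature) from other
# SCADA channels; the input channels listed here are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical SCADA inputs: wind speed, active power, ambient temperature.
X = rng.uniform([3.0, 0.0, -10.0], [25.0, 2000.0, 35.0], size=(500, 3))
y = 40.0 + 0.02 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1.0, 500)  # toy target

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)
print("R^2 on training data:", round(model.score(scaler.transform(X), y), 3))
```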

  15. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    Science.gov (United States)

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  16. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method tha...

  17. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  18. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
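
    For reference, a short sketch of two of the performance criteria named above, using their standard definitions on hypothetical simulated and observed discharge series:

```python
# Standard definitions of NSE and KGE, computed on hypothetical discharge data.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]            # correlation component
    alpha = sim.std() / obs.std()              # variability component
    beta = sim.mean() / obs.mean()             # bias component
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0, 1.5])
sim = np.array([1.1, 1.8, 3.6, 3.2, 2.1, 1.4])
print(f"NSE = {nse(sim, obs):.3f}, KGE = {kge(sim, obs):.3f}")
```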

  19. Resuspension parameters for TRAC dispersion model

    International Nuclear Information System (INIS)

    Langer, G.

    1987-01-01

    Resuspension factors for the wind erosion of soil contaminated with plutonium are necessary to run the Rocky Flats Plant Terrain Responsive Atmospheric Code (TRAC). The model predicts the dispersion and resulting population dose due to accidental plutonium releases.

  20. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  1. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
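
    A small simulation sketch of the non-collapsibility issue described above, with assumed data-generating values and a logistic outcome model; it simply prints the covariate-adjusted and propensity-score-adjusted estimates next to the true conditional log odds ratio:

```python
# Simulation sketch (assumed data-generating values): compare the conditional
# log odds ratio recovered when adjusting for the covariate itself versus the
# estimated propensity score in a logistic outcome model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                                   # confounder
t = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))          # treatment assignment
true_log_or = 1.0                                        # true conditional effect
y = rng.binomial(1, 1 / (1 + np.exp(-(true_log_or * t + 1.5 * x))))

ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

cov_adj = LogisticRegression().fit(np.column_stack([t, x]), y)    # adjust for x
ps_adj = LogisticRegression().fit(np.column_stack([t, ps]), y)    # adjust for PS
print("true conditional log OR:", true_log_or)
print("covariate-adjusted estimate:", round(cov_adj.coef_[0][0], 3))
print("propensity-score-adjusted estimate:", round(ps_adj.coef_[0][0], 3))
```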

  2. Modeling Influenza Transmission Using Environmental Parameters

    Science.gov (United States)

    Soebiyanto, Radina P.; Kiang, Richard K.

    2010-01-01

    Influenza is an acute viral respiratory disease that carries significant mortality, morbidity and economic burden worldwide. It infects approximately 5-15% of the world population and causes 250,000-500,000 deaths each year. The role of the environment in influenza is often inferred from the latitudinal variability of the influenza seasonality pattern. In regions with a temperate climate, influenza epidemics exhibit a clear seasonal pattern that peaks during the winter months, but the pattern is not as evident in the tropics. Toward this end, we developed mathematical models and forecasting capabilities for influenza in two regions characterized by warm climates: Hong Kong (China) and Maricopa County (Arizona, USA). The best model for Hong Kong uses Land Surface Temperature (LST), precipitation and relative humidity as its covariates, whereas for Maricopa County we found that weekly influenza cases can best be modelled using mean air temperature as the covariate. Our forecasts can further guide public health organizations in targeting influenza prevention and control measures such as vaccination.
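
    A minimal sketch of an environmental-covariate regression of this kind (synthetic weekly counts and an assumed seasonal temperature series, fitted with a log-linear Poisson model):

```python
# Illustrative sketch (synthetic data, not the Hong Kong or Maricopa series):
# weekly influenza counts regressed on a temperature covariate with a
# log-linear Poisson model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
weeks = np.arange(104)
temperature = 20 + 10 * np.sin(2 * np.pi * weeks / 52)   # assumed seasonal series
expected = np.exp(4.0 - 0.08 * temperature)              # colder weeks -> more cases
cases = rng.poisson(expected)

X = sm.add_constant(temperature)
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(fit.params)    # intercept and temperature coefficient (close to -0.08)
```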

  3. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.

  4. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Science.gov (United States)

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure

  5. Modeling phosphorus in the Lake Allatoona watershed using SWAT: I. Developing phosphorus parameter values.

    Science.gov (United States)

    Radcliffe, D E; Lin, Z; Risse, L M; Romeis, J J; Jackson, C R

    2009-01-01

    Lake Allatoona is a large reservoir north of Atlanta, GA, that drains an area of about 2870 km2 scheduled for a phosphorus (P) total maximum daily load (TMDL). The Soil and Water Assessment Tool (SWAT) model has been widely used for watershed-scale modeling of P, but there is little guidance on how to estimate P-related parameters, especially those related to in-stream P processes. In this paper, methods are demonstrated to individually estimate SWAT soil-related P parameters and to collectively estimate P parameters related to stream processes. Stream related parameters were obtained using the nutrient uptake length concept. In a manner similar to experiments conducted by stream ecologists, a small point source is simulated in a headwater sub-basin of the SWAT models, then the in-stream parameter values are adjusted collectively to get an uptake length of P similar to the values measured in the streams in the region. After adjusting the in-stream parameters, the P uptake length estimated in the simulations ranged from 53 to 149 km compared to uptake lengths measured by ecologists in the region of 11 to 85 km. Once the a priori P-related parameter set was developed, the SWAT models of main tributaries to Lake Allatoona were calibrated for daily transport. Models using SWAT P parameters derived from the methods in this paper outperformed models using default parameter values when predicting total P (TP) concentrations in streams during storm events and TP annual loads to Lake Allatoona.

  6. Dynamics in the Parameter Space of a Neuron Model

    Science.gov (United States)

    Paulo, C. Rech

    2012-06-01

    Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.

  7. Health economic modeling of the potential cost saving effects of Neurally Adjusted Ventilator Assist.

    Science.gov (United States)

    Hjelmgren, Jonas; Bruce Wirta, Sara; Huetson, Pernilla; Myrén, Karl-Johan; Göthberg, Sylvia

    2016-02-01

    Asynchrony between patient and ventilator breaths is associated with increased duration of mechanical ventilation (MV). Neurally Adjusted Ventilatory Assist (NAVA) controls MV through an esophageal reading of diaphragm electrical activity via a nasogastric tube mounted with electrode rings. NAVA has been shown to decrease asynchrony in comparison to pressure support ventilation (PSV). The objective of this study was to conduct a health economic evaluation of NAVA compared with PSV. We developed a model based on an indirect link between improved synchrony with NAVA versus PSV and fewer days spent on MV in synchronous patients. Unit costs for MV were obtained from the Swedish intensive care unit register, and used in the model along with NAVA-specific costs. The importance of each parameter (proportion of asynchronous patients, costs, and average MV duration) for the overall results was evaluated through sensitivity analyses. Base case results showed that 21% of patients ventilated with NAVA were asynchronous versus 52% of patients receiving PSV. This equals an absolute difference of 31% and an average of 1.7 days less on MV and a total cost saving of US$7886 (including NAVA catheter costs). A breakeven analysis suggested that NAVA was cost effective compared with PSV given an absolute difference in the proportion of asynchronous patients greater than 2.5% (49.5% versus 52% asynchronous patients with NAVA and PSV, respectively). The base case results were stable to changes in parameters, such as difference in asynchrony, duration of ventilation and daily intensive care unit costs. This study showed economically favorable results for NAVA versus PSV. Our results show that only a minor decrease in the proportion of asynchronous patients with NAVA is needed for investments to pay off and generate savings. Future studies need to confirm this result by directly relating improved synchrony to the number of days on MV. © The Author(s), 2015.
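
    A back-of-envelope sketch of the cost logic described above; the extra ventilator days attributable to asynchrony, the daily ICU cost and the catheter cost are assumed illustrative figures, not the study's Swedish unit costs:

```python
# Back-of-envelope sketch of the cost model: savings scale with the absolute
# reduction in the proportion of asynchronous patients and the extra MV days
# that asynchrony entails, less the NAVA catheter cost.
def expected_saving(async_psv, async_nava, extra_mv_days, daily_icu_cost,
                    nava_catheter_cost):
    avoided_days = (async_psv - async_nava) * extra_mv_days   # per patient
    return avoided_days * daily_icu_cost - nava_catheter_cost

# Assumed figures: 52% vs 21% asynchronous, asynchrony adds ~5.5 ventilator
# days, ICU day ~US$4500, catheter ~US$300.
print(round(expected_saving(0.52, 0.21, 5.5, 4500.0, 300.0), 2))
```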

  8. Nonlinear adaptive synchronization rule for identification of a large amount of parameters in dynamical models

    International Nuclear Information System (INIS)

    Ma Huanfei; Lin Wei

    2009-01-01

    The existing adaptive synchronization technique based on the stability theory and invariance principle of dynamical systems, though theoretically proved to be valid for parameter identification in specific models, always shows a slow convergence rate and may even fail in practice when the number of parameters becomes large. Here, a novel nonlinear adaptive rule for parameter updating is proposed to accelerate the rate. Its feasibility is validated by analytical arguments as well as by specific parameter identification in the Lotka-Volterra model with multiple species. Two adjustable factors in this rule influence the identification accuracy, which means that a proper choice of these factors leads to an optimal performance of the rule. In addition, a feasible method for avoiding the occurrence of approximate linear dependence among terms with parameters on the synchronized manifold is also proposed.

  9. Advances in Modelling, System Identification and Parameter ...

    Indian Academy of Sciences (India)

    Authors show, using numerical simulation for two system functions, the improvement in percentage normalized ... of nonlinear systems. The approach is to use multiple linearizing models fitted along the operating trajectories. ... over emphasized in the light of present day high level of research activity in the field of aerospace ...

  10. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    Full Text Available With the aim of developing multiple-input and multiple-output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural-control integrated design methodology is proposed in this paper. Firstly, the modal information is extracted from the finite element model of the structure of the antenna panel, and the mathematical model is then established with the Hamilton principle; secondly, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
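
    A minimal discrete LQR sketch on a stand-in double-integrator plant (not the finite-element panel model of the article), showing how the feedback gain is obtained and applied:

```python
# Minimal discrete LQR sketch: the gain K is obtained from the discrete
# algebraic Riccati equation and used as u = -K x on an assumed toy plant.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])       # assumed plant: double integrator
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                     # state weighting
R = np.array([[0.1]])                        # control weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x = np.array([[1.0], [0.0]])                 # initial shape error
for _ in range(300):
    u = -K @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```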

  11. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN

  12. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and of the support received from the family on psychological wellbeing; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although an indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  14. Implication of Mauk Nursing Rehabilitation Model on Adjustment of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Zeinab Ebrahimpour mouziraji

    2014-12-01

    Full Text Available Objectives: Stroke is a neurological syndrome of sudden or gradual onset caused by damage to the cerebral blood vessels, with effects lasting 24 hours or more. The complications of stroke affect many aspects of the individual's life. According to the studies of De Spulveda and Chang, disability reduces effective adjustment. This study aimed to examine the adjustment of stroke patients based on the main concepts of the Mauk nursing rehabilitation model. Methods: In a quasi-experimental, one-group pretest-posttest design, data were collected in the neurology clinic of Imam Khomeini hospital and a stroke patient rehabilitation center in Tehran (Tabassom). Data collection included demographic and adjustment questionnaires for stroke patients. The intervention, based on the Mauk model, comprised seven training sessions of one hour each, delivered to seven patients. Data were analyzed in SPSS with the paired t-test and compared with the pretest results. Results: There was a significant difference in the mean scores on the stroke patient adjustment questionnaire between pretest and posttest. Among the adjustment subscales, however, no statistically significant pretest-posttest difference was found except for relationship with wife and personal adjustment. Discussion: The results indicated that the training affected some aspects of adjustment in stroke patients, improving function and reducing complications and limitations. Nurses can help by implementing plans such as patient education in this regard.

  15. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
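
    A sketch of batch least-squares parameter estimation of this kind, using a one-dimensional instantaneous-release diffusion model as a stand-in for the two-dimensional shear-diffusion model; all values are illustrative:

```python
# Illustrative sketch: simulate noisy remote-sensed concentration readings from
# a one-dimensional instantaneous-release diffusion model, then recover the
# release mass and diffusivity by batch least squares.
import numpy as np
from scipy.optimize import curve_fit

def plume(x, mass, diff):
    t = 100.0                                        # fixed observation time (s)
    return mass / np.sqrt(4 * np.pi * diff * t) * np.exp(-x**2 / (4 * diff * t))

rng = np.random.default_rng(2)
x_obs = np.linspace(-50.0, 50.0, 25)                 # sensor locations
truth = plume(x_obs, mass=500.0, diff=2.0)
data = truth + rng.normal(0.0, 0.02 * truth.max(), x_obs.size)  # simulated noise

(mass_hat, diff_hat), cov = curve_fit(plume, x_obs, data, p0=[100.0, 1.0])
print(f"mass ~ {mass_hat:.1f}, diffusivity ~ {diff_hat:.2f}",
      "| std errors:", np.sqrt(np.diag(cov)).round(3))
```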

  16. Lumped parameter models for the interpretation of environmental tracer data

    International Nuclear Information System (INIS)

    Maloszewski, P.; Zuber, A.

    1996-01-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs
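
    A sketch of the exponential-model (EM) convolution for a decaying tracer, one of the lumped-parameter models listed above; the input history and mean transit time are illustrative assumptions:

```python
# Lumped-parameter sketch: output concentration is the input history convolved
# with an exponential transit-time distribution and radioactive decay.
import numpy as np

def exponential_model(c_in, mean_tt, decay, dt=1.0):
    """c_out(t) = sum over tau of c_in(t - tau) * g(tau) * exp(-decay*tau) * dt."""
    tau = np.arange(0, 10 * mean_tt, dt)
    g = np.exp(-tau / mean_tt) / mean_tt          # exponential TTD
    weights = g * np.exp(-decay * tau) * dt
    return np.convolve(c_in, weights)[: len(c_in)]

years = np.arange(1960, 2001)
c_in = np.where(years < 1965, 10.0, 100.0)        # hypothetical tracer input
c_out = exponential_model(c_in, mean_tt=12.0, decay=np.log(2) / 12.32)  # tritium decay
print(c_out[-5:].round(2))
```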

  17. Lumped parameter models for the interpretation of environmental tracer data

    Energy Technology Data Exchange (ETDEWEB)

    Maloszewski, P [GSF-Inst. for Hydrology, Oberschleissheim (Germany); Zuber, A [Institute of Nuclear Physics, Cracow (Poland)

    1996-10-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs.

  18. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arresters characteristics is very important for insulation coordination studies and systems reliability. In this context many researchers addressed considerable efforts to the development of surge arresters models to reproduce the dynamic characteristics observed in their behaviour when subjected to fast front impulse currents. The difficulties with these models reside essentially in the calculation and the adjustment of their parameters. This paper proposes a new technique based on genetic algorithm to obtain the best possible series of parameter values of ZnO surge arresters models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model on network system studies is presented and discussed. (author)

  19. Parameters modelling of amaranth grain processing technology

    Science.gov (United States)

    Derkanosova, N. M.; Shelamova, S. A.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.

    2018-03-01

    The article presents a technique that allows calculating the structure of a multicomponent bakery mixture for the production of enriched products, taking into account the instability of nutrient content, and ensuring the fulfilment of technological requirements and, at the same time considering consumer preferences. The results of modelling and analysis of optimal solutions are given by the example of calculating the structure of a three-component mixture of wheat and rye flour with an enriching component, that is, whole-hulled amaranth flour applied to the technology of bread from a mixture of rye and wheat flour on a liquid leaven.

  20. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  1. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  2. The influence of model parameters on catchment-response

    International Nuclear Information System (INIS)

    Shah, S.M.S.; Gabriel, H.F.; Khan, A.A.

    2002-01-01

    This paper deals with the study of the influence of conceptual rainfall-runoff model parameters on catchment response (runoff). A conceptual modified watershed yield model is employed to study the effects of the model parameters on the catchment response, i.e. runoff. The model is calibrated using a manual parameter-fitting approach, also known as trial-and-error parameter fitting. In all, there are twenty-one (21) parameters that control the functioning of the model. A lumped parametric approach is used. The detailed analysis was performed on the Ling River near Kahuta, which has a catchment area of 56 sq. miles. The model includes physical parameters such as GWSM, PETS and PGWRO, fitting coefficients such as CINF and CGWS, and initial estimates of the surface-water and groundwater storages, i.e. srosp and gwsp. Sensitivity analysis offers a good way, without repetitious computations, of giving each influencing factor the proper weight and consideration when it is evaluated. Sensitivity analysis was performed to evaluate the influence of the model parameters on runoff. The sensitivity and relative contributions of the model parameters influencing the catchment response are studied. (author)

  3. Identification of ecosystem parameters by SDE-modelling

    DEFF Research Database (Denmark)

    Stochastic differential equations (SDEs) for ecosystem modelling have attracted increasing attention during recent years. The modelling has mostly been through simulation experiments in order to analyse how system noise propagates through the ordinary differential equation formulation of ecosystem models. Estimation of parameters in SDEs is, however, possible by combining Kalman filter techniques and likelihood estimation. By modelling parameters as random walks it is possible to identify linear as well as non-linear interactions between ecosystem components. By formulating a simple linear SDE

  4. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which very little or no monitoring data are available, which raises the question of whether it would be possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. The resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.

  5. An improved state-parameter analysis of ecosystem models using data assimilation

    Science.gov (United States)

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data recursively into the model and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into an eddy flux partitioning model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the
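
    A compact sketch of joint state-parameter estimation with an augmented-state ensemble Kalman filter on a toy scalar model (not the SEnKF or the flux-partitioning model of the study):

```python
# Toy augmented-state ensemble Kalman filter: the unknown parameter is appended
# to the state vector and updated by the same analysis step as the state.
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_steps = 100, 60
true_param, obs_err = 0.8, 0.1

def forward(state, param):
    return param * state + 1.0                     # toy dynamics

# Ensemble of [state, parameter]; parameter initialized with a broad spread.
ens = np.column_stack([rng.normal(0.0, 1.0, n_ens),
                       rng.normal(0.5, 0.3, n_ens)])
x_true = 0.0
for _ in range(n_steps):
    x_true = forward(x_true, true_param)
    obs = x_true + rng.normal(0.0, obs_err)
    ens[:, 0] = forward(ens[:, 0], ens[:, 1]) + rng.normal(0.0, 0.05, n_ens)
    cov = np.cov(ens, rowvar=False)                # joint state-parameter covariance
    gain = cov[:, 0] / (cov[0, 0] + obs_err**2)    # Kalman gain for scalar observation
    innovations = obs + rng.normal(0.0, obs_err, n_ens) - ens[:, 0]
    ens += np.outer(innovations, gain)             # update state and parameter together
print("estimated parameter:", ens[:, 1].mean().round(3), "| true:", true_param)
```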

  6. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  7. Brownian motion model with stochastic parameters for asset prices

    Science.gov (United States)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x (t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameter with that of the model with stochastic parameter.
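
    A simple simulation sketch contrasting a fixed-parameter geometric Brownian motion path with one whose drift and volatility drift slowly over time, a crude stand-in for the conditional-distribution scheme described above (all values assumed):

```python
# Illustrative comparison of fixed-parameter GBM with a stochastic-parameter
# variant whose drift and volatility evolve as slow random walks.
import numpy as np

rng = np.random.default_rng(4)
n, dt, s0 = 250, 1 / 250, 100.0

mu, sigma = 0.08, 0.2
fixed = s0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * rng.normal(size=n)))

price, mu_t, sig_t, path = s0, 0.08, 0.2, []
for _ in range(n):
    mu_t += rng.normal(0.0, 0.01)                   # slowly varying drift
    sig_t = abs(sig_t + rng.normal(0.0, 0.005))     # slowly varying volatility
    price *= np.exp((mu_t - 0.5 * sig_t**2) * dt + sig_t * np.sqrt(dt) * rng.normal())
    path.append(price)
print("fixed-parameter end price:", round(fixed[-1], 2),
      "| stochastic-parameter end price:", round(path[-1], 2))
```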

  8. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Full Text Available Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  9. Integral parameters for the Godiva benchmark calculated by using theoretical and adjusted fission spectra of 235U

    International Nuclear Information System (INIS)

    Caldeira, A.D.

    1987-05-01

    The theoretical and adjusted Watt spectrum representations for 235U are used as weighting functions to calculate k_eff and the fission rate ratio θf28/θf25 for the Godiva benchmark. The results obtained show that the values of k_eff and θf28/θf25 are not affected by the change in spectrum form. (author)
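
    For context, a sketch of the theoretical Watt spectrum form, chi(E) proportional to exp(-E/a)·sinh(sqrt(b·E)); the a and b values below are commonly quoted for thermal fission of 235U and are assumptions here, not the adjusted spectrum of the study:

```python
# Sketch of a Watt fission-spectrum weighting function; parameter values are
# assumed (commonly quoted for thermal fission of 235U), not the study's fit.
import numpy as np

a, b = 0.988, 2.249                       # MeV and 1/MeV (assumed values)
de = 1e-3
energy = np.arange(de, 20.0, de)          # MeV
chi = np.exp(-energy / a) * np.sinh(np.sqrt(b * energy))
chi /= chi.sum() * de                     # normalize to unit integral
print("mean fission-neutron energy ~", round((energy * chi).sum() * de, 3), "MeV")
```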

  10. Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25

    International Nuclear Information System (INIS)

    Larreteguy, A.E; Mazufri, C.M

    2000-01-01

    The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The efforts in modeling this mechanism are aimed to: get a deep understanding of the physical phenomena involved; identify the set of parameters relevant to the dynamics of the system; allow the numerical simulation of the system; predict the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and help in defining the design of the CAPEM (loop for testing the mechanism under high pressure/high temperature conditions). Thanks to the close interaction between the mechanics, the experimenters, and the modelists that compose the MSAC task force, it has been possible to suggest improvements, not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop.

  11. Homoclinic connections and subcritical Neimark bifurcation in a duopoly model with adaptively adjusted productions

    International Nuclear Information System (INIS)

    Agliari, Anna

    2006-01-01

    In this paper we study some global bifurcations arising in the Puu's oligopoly model when we assume that the producers do not adjust to the best reply but use an adaptive process to obtain at each step the new production. Such bifurcations cause the appearance of a pair of closed invariant curves, one attracting and one repelling, this latter being involved in the subcritical Neimark bifurcation of the Cournot equilibrium point. The aim of the paper is to highlight the relationship between the global bifurcations causing the appearance/disappearance of two invariant closed curves and the homoclinic connections of some saddle cycle, already conjectured in [Agliari A, Gardini L, Puu T. Some global bifurcations related to the appearance of closed invariant curves. Comput Math Simul 2005;68:201-19]. We refine the results obtained in such a paper, showing that the appearance/disappearance of closed invariant curves is not necessarily related to the existence of an attracting cycle. The characterization of the periodicity tongues (i.e. a region of the parameter space in which an attracting cycle exists) associated with a subcritical Neimark bifurcation is also discussed

  12. Determination of the Corona model parameters with artificial neural networks

    International Nuclear Information System (INIS)

    Ahmet, Nayir; Bekir, Karlik; Arif, Hashimov

    2005-01-01

    Full text: The aim of this study is to calculate new model parameters that take into account the corona on the wires of electrical transmission lines. For this purpose, a neural network model is proposed for modeling the frequency characteristics of the corona. This model was then compared with another model developed at the Polytechnic Institute of Saint Petersburg. The results of developing the specified corona model, calculating its influence on the wave processes in a multi-wire line, and determining its parameters are presented. Calculation equations are given for an electrical transmission line, with allowance for the skin effect in the ground and wires, with reference to the developed corona model.

  13. Biological parameters for lung cancer in mathematical models of carcinogenesis

    International Nuclear Information System (INIS)

    Jacob, P.; Jacob, V.

    2003-01-01

    Applications of the two-step model of carcinogenesis with clonal expansion (TSCE) to lung cancer data are reviewed, including those on atomic bomb survivors from Hiroshima and Nagasaki, British doctors, Colorado Plateau miners, and Chinese tin miners. Different sets of identifiable model parameters are used in the literature. The parameter set which could be determined with the lowest uncertainty consists of the net proliferation rate gamma of intermediate cells, the hazard h55 at an intermediate age, and the hazard h∞ at an asymptotically large age. Also, the values of these three parameters obtained in the various studies are more consistent than other identifiable combinations of the biological parameters. Based on representative results for these three parameters, implications for the biological parameters in the TSCE model are derived. (author)

  14. Learning about physical parameters: the importance of model discrepancy

    International Nuclear Information System (INIS)

    Brynjarsdóttir, Jenný; O'Hagan, Anthony

    2014-01-01

    Science-based simulation models are widely used to predict the behavior of complex physical systems. It is also common to use observations of the physical system to solve the inverse problem, that is, to learn about the values of parameters within the model, a process which is often called calibration. The main goal of calibration is usually to improve the predictive performance of the simulator, but the values of the parameters in the model may also be of intrinsic scientific interest in their own right. In order to make appropriate use of observations of the physical system it is important to recognize model discrepancy, the difference between reality and the simulator output. We illustrate through a simple example that an analysis that does not account for model discrepancy may lead to biased and over-confident parameter estimates and predictions. The challenge with incorporating model discrepancy in statistical inverse problems is that it is confounded with the calibration parameters, which will only be resolved with meaningful priors. For our simple example, we model the model discrepancy via a Gaussian process and demonstrate that, by accounting for model discrepancy, our prediction within the range of the data is correct. However, only with realistic priors on the model discrepancy do we uncover the true parameter values. Through theoretical arguments we show that these findings are typical of the general problem of learning about physical parameters and the underlying physical system using science-based mechanistic models. (paper)

  15. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s

  16. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, the linearized forms of the observational equations with explicitly specified coefficients are used). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  17. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    Science.gov (United States)

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  18. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    Science.gov (United States)

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  19. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds of each category; when those categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of a GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results provide a local GWOLR model for each village and the probability of each category of dengue fever patient counts.
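    The "geographically weighted" part of such a model is usually implemented by weighting each observation's contribution to the ordinal-logistic log-likelihood with a kernel that decays with distance from the regression point. The sketch below shows only that weighting step, following the common Gaussian-kernel construction; it is an illustrative assumption rather than the paper's R code, and the coordinates are made up.

      # Gaussian spatial kernel weights used when fitting a local model at one location.
      import numpy as np

      def gwr_weights(coords, regression_point, bandwidth):
          """coords: (n, 2) array of observation locations; returns (n,) kernel weights."""
          d = np.linalg.norm(coords - regression_point, axis=1)
          return np.exp(-0.5 * (d / bandwidth) ** 2)

      # Example: weights for a local fit centred on the first of three villages.
      villages = np.array([[110.40, -6.99], [110.44, -7.02], [110.50, -7.05]])  # lon, lat (made up)
      w = gwr_weights(villages, villages[0], bandwidth=0.05)
      print(w)   # these weights multiply each observation's log-likelihood term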

  20. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  1. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
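    A compact illustration of what a "sloppy" spectrum looks like, using a toy sum-of-two-exponentials model (an assumed stand-in, not one of the models surveyed in the paper): the eigenvalues of the Gauss-Newton Hessian J^T J typically spread over many decades.

      import numpy as np

      t = np.linspace(0.1, 5.0, 50)
      theta0 = np.log([1.0, 0.3, 2.0, 1.3])          # log-parameters: a1, k1, a2, k2

      def model(log_theta):
          a1, k1, a2, k2 = np.exp(log_theta)
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

      # Finite-difference Jacobian with respect to the log-parameters
      eps = 1e-6
      J = np.empty((t.size, theta0.size))
      for j in range(theta0.size):
          dp = np.zeros_like(theta0); dp[j] = eps
          J[:, j] = (model(theta0 + dp) - model(theta0 - dp)) / (2 * eps)

      eigvals = np.linalg.eigvalsh(J.T @ J)
      print("eigenvalues of J^T J:", eigvals)
      print("decades spanned:", np.log10(eigvals.max() / eigvals.min()))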

  2. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    Abstract. The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies with various models, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracies of flood routing using various models, and the optimal estimated outflows by the NVPNLMM were closer to the observed outflows than those of the other models.
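    A hedged sketch of the calibration idea behind such models, shown here for the simpler fixed-parameter nonlinear Muskingum storage law S = K[xI + (1-x)O]^m (the variable-parameter extension of the paper lets K, x and m change during the flood); the hydrograph values are made up and scipy's Nelder-Mead search stands in for whatever optimizer is used.

      import numpy as np
      from scipy.optimize import minimize

      dt = 1.0                                                   # routing time step (h)
      inflow  = np.array([22, 35, 60, 95, 120, 105, 80, 60, 45, 35, 28, 24], float)
      outflow = np.array([22, 25, 36, 55,  80,  95, 92, 80, 65, 52, 42, 34], float)  # observed

      def route(params):
          K, x, m = params
          O = np.empty_like(inflow)
          O[0] = outflow[0]
          S = K * (x * inflow[0] + (1 - x) * O[0]) ** m          # initial storage
          for t in range(len(inflow) - 1):
              S += dt * (inflow[t] - O[t])                       # storage continuity dS/dt = I - O
              O[t + 1] = ((S / K) ** (1.0 / m) - x * inflow[t + 1]) / (1 - x)
          return O

      def sse(params):
          K, x, m = params
          if not (K > 0 and 0 < x < 0.5 and m > 0):              # penalise infeasible parameters
              return 1e12
          O = route(params)
          if not np.all(np.isfinite(O)):
              return 1e12
          return float(np.sum((O - outflow) ** 2))

      res = minimize(sse, x0=[1.0, 0.2, 1.3], method="Nelder-Mead")
      print("estimated K, x, m:", res.x, " SSE:", round(res.fun, 1))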

  3. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.

  4. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    Full Text Available This paper presents a technique for estimating the matrix parameters of a piecewise-continuous model of a nonlinear plant, using the time responses of the nonlinear model and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to the formation of a piecewise-continuous model of an aircraft turbofan engine are presented

  5. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Science.gov (United States)

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents is stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. (c) 2015 APA, all rights reserved).

  6. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-15

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Linear and logarithmic functions are both adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional value which depends on the current generation common value. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet’s hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • Fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.

  7. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    International Nuclear Information System (INIS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Linear and logarithmic functions are both adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional value which depends on the current generation common value. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet’s hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • Fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations
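    A small sketch of the fitness measure described in the abstract, i.e. summed distances between equidistant key points of normalized hysteresis loops; the JA loop simulator itself is omitted, so a genetic algorithm would evaluate fitness(measured_loop, ja_loop(candidate_params)) for each candidate parameter set.

      import numpy as np

      def normalize(loop):
          """Scale the H and B columns of an (N, 2) loop to the range [-1, 1]."""
          return loop / np.abs(loop).max(axis=0)

      def key_points(loop, n=40):
          """Return n points spaced equidistantly along the arc length of the loop."""
          d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(loop, axis=0), axis=1))]
          s = np.linspace(0.0, d[-1], n)
          return np.column_stack([np.interp(s, d, loop[:, 0]), np.interp(s, d, loop[:, 1])])

      def fitness(measured_loop, simulated_loop, n=40):
          """Smaller is better: summed distance between corresponding key points."""
          pm = key_points(normalize(measured_loop), n)
          ps = key_points(normalize(simulated_loop), n)
          return float(np.sum(np.linalg.norm(pm - ps, axis=1)))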

  8. Identification of parameters of discrete-continuous models

    International Nuclear Information System (INIS)

    Cekus, Dawid; Warys, Pawel

    2015-01-01

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on the genetic algorithm. The stages of the presented procedure make it possible to identify any parameters of discrete-continuous systems

  9. Identification of parameters of discrete-continuous models

    Energy Technology Data Exchange (ETDEWEB)

    Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on the genetic algorithm. The stages of the presented procedure make it possible to identify any parameters of discrete-continuous systems.

  10. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  11. Risk adjustment models for short-term outcomes after surgical resection for oesophagogastric cancer.

    Science.gov (United States)

    Fischer, C; Lingsma, H; Hardwick, R; Cromwell, D A; Steyerberg, E; Groene, O

    2016-01-01

    Outcomes for oesophagogastric cancer surgery are compared with the aim of benchmarking quality of care. Adjusting for patient characteristics is crucial to avoid biased comparisons between providers. The study objective was to develop a case-mix adjustment model for comparing 30- and 90-day mortality and anastomotic leakage rates after oesophagogastric cancer resections. The study reviewed existing models, considered expert opinion and examined audit data in order to select predictors that were consequently used to develop a case-mix adjustment model for the National Oesophago-Gastric Cancer Audit, covering England and Wales. Models were developed on patients undergoing surgical resection between April 2011 and March 2013 using logistic regression. Model calibration and discrimination was quantified using a bootstrap procedure. Most existing risk models for oesophagogastric resections were methodologically weak, outdated or based on detailed laboratory data that are not generally available. In 4882 patients with oesophagogastric cancer used for model development, 30- and 90-day mortality rates were 2·3 and 4·4 per cent respectively, and 6·2 per cent of patients developed an anastomotic leak. The internally validated models, based on predictors selected from the literature, showed moderate discrimination (area under the receiver operating characteristic (ROC) curve 0·646 for 30-day mortality, 0·664 for 90-day mortality and 0·587 for anastomotic leakage) and good calibration. Based on available data, three case-mix adjustment models for postoperative outcomes in patients undergoing curative surgery for oesophagogastric cancer were developed. These models should be used for risk adjustment when assessing hospital performance in the National Health Service, and tested in other large health systems. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
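    An illustrative sketch (entirely synthetic data and assumed predictor names) of the modelling recipe described: a logistic regression case-mix model for 30-day mortality whose discrimination (area under the ROC curve) is quantified with a bootstrap procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 4000
      age   = rng.normal(68.0, 9.0, n)                 # hypothetical case-mix predictors
      asa   = rng.integers(1, 4, n)                    # ASA grade 1-3 (assumed coding)
      stage = rng.integers(1, 5, n)                    # tumour stage 1-4 (assumed coding)
      X = np.column_stack([age, asa, stage])
      logit = -9.0 + 0.06 * age + 0.5 * asa + 0.3 * stage
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit)) # synthetic 30-day mortality

      model = LogisticRegression(max_iter=1000).fit(X, y)
      apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

      # Bootstrap: refit on resampled data, evaluate on the original data
      aucs = []
      for _ in range(200):
          idx = rng.integers(0, n, n)
          m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
          aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
      print("apparent AUC:", round(apparent_auc, 3))
      print("bootstrap AUC 95% interval:", np.percentile(aucs, [2.5, 97.5]))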

  12. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  13. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and another in which the cointegrating relations are estimated recursively from a likelihood function where the short-run parameters have been concentrated out. We suggest graphical procedures based on recursively estimated eigenvalues to evaluate the constancy of the long-run parameters in the model; these procedures can also be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities.

  14. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density functions and are included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has the potential to maximally utilize the available radiobiology knowledge for better IMRT treatment.
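    For reference, the generalized equivalent uniform dose referred to above is commonly written as follows (standard formulation from the radiobiology literature, not quoted from the paper), where v_i is the fractional volume receiving dose D_i and a is the tissue-specific parameter whose uncertainty is modeled:

      \mathrm{EUD} = \Bigl( \sum_i v_i \, D_i^{\,a} \Bigr)^{1/a}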

  15. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.

  16. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
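    A schematic sketch of the two-stage idea in a deliberately trivial setting (three made-up scalar models and a scalar experiment input; the paper's setting involves PDE models): stage one designs, for each pair of candidate models, the input on which their predictions differ most, and stage two selects the model that best matches the observations collected with those inputs.

      import numpy as np

      inputs = np.linspace(0.0, 2.0, 41)                 # admissible experiment inputs (toy)

      # Hypothetical candidate models with parameters assumed known for this sketch;
      # in the paper the parameters of the selected model are estimated in stage two.
      models = {
          "linear":     lambda u: 1.2 * u,
          "quadratic":  lambda u: 0.4 * u + 0.5 * u**2,
          "saturating": lambda u: 1.8 * (1.0 - np.exp(-u)),
      }
      true_system = models["quadratic"]                  # pretend reality (noise omitted)

      def best_separating_input(f, g):
          """Input at which two candidate models disagree the most."""
          return inputs[np.argmax(np.abs(f(inputs) - g(inputs)))]

      # Stage 1: one designed experiment per unordered pair of models
      names = list(models)
      design = {(a, b): best_separating_input(models[a], models[b])
                for i, a in enumerate(names) for b in names[i + 1:]}

      # Stage 2: run the designed experiments and score each model against the data
      scores = dict.fromkeys(names, 0.0)
      for (a, b), u in design.items():
          y = true_system(u)
          for name in (a, b):
              scores[name] += (models[name](u) - y) ** 2
      print("selected model:", min(scores, key=scores.get))   # -> "quadratic"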

  17. Modelling hydrodynamic parameters to predict flow assisted corrosion

    International Nuclear Information System (INIS)

    Poulson, B.; Greenwell, B.; Chexal, B.; Horowitz, J.

    1992-01-01

    During the past 15 years, flow assisted corrosion has been a worldwide problem in the power generating industry. The phenomenon is complex and depends on environment, material composition, and hydrodynamic factors. Recently, modeling of flow assisted corrosion has become a subject of great importance. A key part of this effort is modeling the hydrodynamic aspects of this issue. This paper examines which hydrodynamic parameter should be used to correlate the occurrence and rate of flow assisted corrosion with physically meaningful parameters, discusses ways of measuring the relevant hydrodynamic parameter, and describes how the hydrodynamic data is incorporated into the predictive model

  18. Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory

    DEFF Research Database (Denmark)

    Li, Yan; Shuai, Zhikang; Xu, Qinming

    2016-01-01

    Droop control framework with an adjustable virtual impedance loop is proposed in this paper, which is based on the cloud model theory. The proposed virtual impedance loop includes two terms: a negative virtual resistor and an adjustable virtual inductance. The negative virtual resistor term...... sometimes. The cloud model theory is applied to get online the changing line impedance value, which relies on the relevance of the reactive power responding the changing line impedance. The verification of the proposed control strategy is done according to the simulation in a low voltage microgrid in Matlab....

  19. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  20. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are yet unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  1. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A; Mert, M [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H A [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  2. Parameter estimation for LLDPE gas-phase reactor models

    Directory of Open Access Journals (Sweden)

    G. A. Neumann

    2007-06-01

    Full Text Available Product development and advanced control applications require models with good predictive capability. However, in some cases it is not possible to obtain good quality phenomenological models due to the lack of data or the presence of important unmeasured effects. The use of empirical models requires less investment in modeling, but implies the need for larger amounts of experimental data to generate models with good predictive capability. In this work, nonlinear phenomenological and empirical models were compared with respect to their capability to predict the melt index and polymer yield of a low-density polyethylene production process consisting of two fluidized bed reactors connected in series. To adjust the phenomenological model, optimization algorithms based on the flexible polyhedron method of Nelder and Mead showed the best efficiency. To adjust the empirical model, the PLS model was more appropriate for polymer yield, while the melt index required more nonlinear structure, such as that of the QPLS models. In the comparison between these two types of models, better results were obtained for the empirical models.
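    A sketch of the parameter-adjustment step mentioned above, using scipy's implementation of the Nelder-Mead flexible polyhedron method; the melt-index correlation form and the plant data below are hypothetical placeholders, not the phenomenological model of the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      T  = rng.uniform(80.0, 110.0, 40)                  # reactor temperature (synthetic)
      H2 = rng.uniform(0.1, 0.6, 40)                     # hydrogen/ethylene ratio (synthetic)
      mi_obs = 0.05 * np.exp(0.02 * T) * H2**1.3 * rng.lognormal(0.0, 0.05, 40)

      def mi_model(params, T, H2):
          k, a, b = params
          return k * np.exp(a * T) * H2**b               # hypothetical melt-index correlation

      def objective(params):
          k, a, b = params
          if k <= 0:                                     # keep the logarithm well defined
              return 1e12
          r = np.log(mi_model(params, T, H2)) - np.log(mi_obs)
          return float(np.sum(r * r))

      res = minimize(objective, x0=[0.1, 0.01, 1.0], method="Nelder-Mead",
                     options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
      print("estimated k, a, b:", res.x)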

  3. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  4. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    Science.gov (United States)

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
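    A minimal sketch (synthetic data, assumed variable coding) of the multilevel structure described: consumer ratings nested within health plans, with candidate case-mix adjusters as fixed effects and a random intercept per plan; statsmodels' MixedLM is used as a stand-in for the authors' software.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(11)
      n_plans, n_per = 27, 100
      plan = np.repeat(np.arange(n_plans), n_per)
      age = rng.normal(50.0, 15.0, plan.size)
      education = rng.integers(1, 6, plan.size)          # 1 = low ... 5 = high (assumed coding)
      plan_effect = rng.normal(0.0, 0.3, n_plans)[plan]  # between-plan variation
      rating = 7.5 + 0.02 * age - 0.15 * education + plan_effect + rng.normal(0.0, 1.0, plan.size)

      df = pd.DataFrame({"rating": rating, "age": age, "education": education, "plan": plan})
      fit = smf.mixedlm("rating ~ age + education", df, groups=df["plan"]).fit()
      print(fit.summary())                               # fixed effects = candidate case-mix adjusters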

  5. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  6. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event similar in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
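    For context, the ETAS conditional intensity whose parameters are re-estimated daily in this approach is usually written in the form below (standard notation from the ETAS literature, not quoted from the abstract), with background rate mu, reference magnitude M_c, and triggering parameters K, alpha, c and p:

      \lambda(t \mid \mathcal{H}_t) = \mu + \sum_{t_i < t} K \, e^{\alpha (M_i - M_c)} \, (t - t_i + c)^{-p}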

  7. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  8. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    Science.gov (United States)

    Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation, interior flow

  9. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
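    A short sketch of the estimation exercise for the logistic model dP/dt = rP(1 - P/K), using its closed-form solution and nonlinear least squares; the observations below are synthetic stand-ins for the historical microbial counts, and scipy's curve_fit replaces the gradient-search code discussed in the article.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, r, K, P0):
          """Closed-form solution of dP/dt = r P (1 - P/K) with P(0) = P0."""
          return K / (1.0 + (K / P0 - 1.0) * np.exp(-r * t))

      t_obs = np.arange(0.0, 25.0, 3.0)
      P_obs = logistic(t_obs, 0.45, 660.0, 10.0) * np.random.default_rng(5).lognormal(0.0, 0.03, t_obs.size)

      params, cov = curve_fit(logistic, t_obs, P_obs, p0=[0.3, 500.0, 5.0])
      print("r, K, P0 =", params)
      print("std errors =", np.sqrt(np.diag(cov)))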

  10. The analysis and realization of state-matrix parameters of a digital adjuster used for accelerator power system

    International Nuclear Information System (INIS)

    Long Yindong; Zhao Long; Chinese Academy of Sciences, Beijing; Qiao Weimin; Jing Lan

    2008-01-01

    The power-control system of a synchronized accelerator must be a high-resolution, real-time system. In this paper we introduce a platform for the analysis and realization of the parameters of a digital adjuster. As a necessary part of the digital adjuster, it is implemented with the state-space equation method on a Cyclone II device to improve the performance of the accelerator power-control system. With the platform, control of the object within a precision range of up to 10 64 can be realized, depending on the optimum arithmetic and the MATLAB functions. In addition, considering the actual environment, the actual demands can be better met by choosing the parameters appropriately, providing the necessary support for achieving a higher-resolution and better real-time adjuster. (authors)

  11. Uncertainty in dual permeability model parameters for structured soils

    Science.gov (United States)

    Arora, B.; Mohanty, B. P.; McGuire, J. T.

    2012-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics implying that macropores are drained first followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that hydraulic properties and behavior of the matrix-macropore interface is not only a function of saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
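    A toy sketch of the sampling issue discussed: a plain random-walk Metropolis-Hastings chain exploring a strongly correlated two-parameter posterior (a stand-in for correlated DPM parameters). An adaptive variant such as AMCMC would, in addition, tune the proposal covariance from the chain history, which is what resolves the correlations more efficiently.

      import numpy as np

      rng = np.random.default_rng(2)
      cov = np.array([[1.0, 0.95], [0.95, 1.0]])         # strongly correlated "posterior"
      cov_inv = np.linalg.inv(cov)

      def log_post(theta):
          return -0.5 * theta @ cov_inv @ theta          # correlated Gaussian log-density

      theta = np.zeros(2)
      step = 0.5                                         # fixed isotropic proposal (not adapted)
      chain, accepted = [], 0
      for _ in range(20000):
          proposal = theta + rng.normal(0.0, step, 2)
          if np.log(rng.random()) < log_post(proposal) - log_post(theta):
              theta, accepted = proposal, accepted + 1
          chain.append(theta)
      chain = np.array(chain[5000:])                     # discard burn-in
      print("acceptance rate:", accepted / 20000)
      print("recovered parameter correlation:", np.corrcoef(chain.T)[0, 1])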

  12. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    Science.gov (United States)

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) were related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  13. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  14. Real-time adjustment of pressure to demand in water distribution systems: Parameter-less P-controller algorithm

    CSIR Research Space (South Africa)

    Page, Philip R

    2016-08-01

    Full Text Available Remote real-time control is currently the most advanced form of pressure management. Here the parameters describing pressure control valves (or pumps) are changed in real-time in such a way as to provide the optimal pressure in the water...

  15. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.

  16. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work...... investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated...... correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter...
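    A small numerical sketch of the two diagnostics being compared, on a made-up sensitivity (Jacobian) matrix whose first two columns are nearly proportional: the correlation coefficient derived from (J^T J)^-1 approaches +/-1, and the SVD exposes the same problem through a near-zero singular value and its associated parameter combination.

      import numpy as np

      # Hypothetical sensitivity matrix: columns 1 and 2 are almost proportional.
      J = np.array([[1.0, 2.001, 0.5],
                    [2.0, 3.999, 1.0],
                    [1.5, 3.002, 0.2],
                    [0.5, 0.998, 0.9]])

      # Parameter correlation coefficients from the least-squares covariance (J^T J)^-1
      C = np.linalg.inv(J.T @ J)
      d = np.sqrt(np.diag(C))
      corr = C / np.outer(d, d)
      print("corr(theta_1, theta_2):", corr[0, 1])       # magnitude ~ 1.00 flags extreme correlation

      # Singular value decomposition of the Jacobian
      U, s, Vt = np.linalg.svd(J, full_matrices=False)
      print("singular values:", s)                       # one value is nearly zero ...
      print("undetermined direction:", Vt[-1])           # ... and this parameter combination is the culprit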

  17. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and number of sampled points may play a significant role with respect to the statistical errors. Furthermore......, it is shown that the model errors may also contribute significantly to the uncertainty....

  18. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Science.gov (United States)

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  19. Development of a model for case-mix adjustment of pressure ulcer prevalence rates.

    NARCIS (Netherlands)

    Bours, G.J.J.W.; Halfens, J.; Berger, M.P.; Abu-Saad, H.H.; Grol, R.P.T.M.

    2003-01-01

    BACKGROUND: Acute care hospitals participating in the Dutch national pressure ulcer prevalence survey use the results of this survey to compare their outcomes and assess their quality of care regarding pressure ulcer prevention. The development of a model for case-mix adjustment is essential for the

  20. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h^-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model composed of a single PTV accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
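
    A minimal sketch of the modelling idea (synthetic data, not the study's measurements): the predictors are raised to the allometric exponents quoted in the abstract and a multiple linear regression is fitted by ordinary least squares.

      # Allometrically adjusted predictors in a multiple linear regression (synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 18
      ptv = rng.uniform(16.0, 21.0, n)                    # peak treadmill velocity [km/h], made up
      re = rng.uniform(180.0, 220.0, n)                   # running economy, made-up units
      time_10k = 80.0 - 2.0 * ptv + 0.05 * re + rng.normal(0, 1.0, n)   # minutes, synthetic

      # Allometric adjustment: raise predictors to the exponents used in the abstract
      X = np.column_stack([np.ones(n), ptv ** 0.72, re ** 0.60])
      coef, *_ = np.linalg.lstsq(X, time_10k, rcond=None)
      pred = X @ coef
      r2 = 1 - np.sum((time_10k - pred) ** 2) / np.sum((time_10k - time_10k.mean()) ** 2)
      see = np.sqrt(np.sum((time_10k - pred) ** 2) / (n - X.shape[1]))
      print(f"R^2 = {r2:.2f}, SEE = {see:.2f} min")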

  1. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    Science.gov (United States)

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  2. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    Science.gov (United States)

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  3. Testing an Attachment Model of Latina/o College Students' Psychological Adjustment

    Science.gov (United States)

    Garriott, Patton O.; Love, Keisha M.; Tyler, Kenneth M.; Thomas, Deneia M.; Roan-Belle, Clarissa R.; Brown, Carrie L.

    2010-01-01

    The present study examined the influence of attachment relationships on the psychological adjustment of Latina/o university students (N = 80) attending predominantly White institutions of higher education. A path analysis conducted to test a hypothesized model of parent and peer attachment, self-esteem, and psychological distress indicated that…

  4. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    Full Text Available The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
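
    A minimal sketch of how a SPOTPY setup class and sampler are typically wired together, using the Rosenbrock function as the "model". It follows the package's documented interface, but names and signatures may differ between versions, so treat it as an assumption rather than a definitive usage example.

      # Sketch of a SPOTPY setup class and an SCE-UA sampler run (toy Rosenbrock model).
      import spotpy

      class RosenbrockSetup:
          def __init__(self):
              self.params = [spotpy.parameter.Uniform('x', -5, 5),
                             spotpy.parameter.Uniform('y', -5, 5)]

          def parameters(self):
              return spotpy.parameter.generate(self.params)

          def simulation(self, vector):
              x, y = vector
              return [(1 - x) ** 2 + 100 * (y - x ** 2) ** 2]

          def evaluation(self):
              return [0.0]                      # known optimum of the Rosenbrock function

          def objectivefunction(self, simulation, evaluation):
              # SCE-UA searches for the minimum of this objective function
              return spotpy.objectivefunctions.rmse(evaluation, simulation)

      if __name__ == "__main__":
          sampler = spotpy.algorithms.sceua(RosenbrockSetup(), dbname="rosen", dbformat="csv")
          sampler.sample(2000)                  # number of model evaluations
          results = sampler.getdata()
          print(spotpy.analyser.get_best_parameterset(results))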

  5. Parameter resolution in two models for cell survival after radiation

    International Nuclear Information System (INIS)

    Di Cera, E.; Andreasi Bassi, F.; Arcovito, G.

    1989-01-01

    The resolvability of model parameters for the linear-quadratic and the repair-misrepair models for cell survival after radiation has been studied by Monte Carlo simulations as a function of the number of experimental data points collected in a given dose range and the experimental error. Statistical analysis of the results reveals the range of experimental conditions under which the model parameters can be resolved with sufficient accuracy, and points out some differences in the operational aspects of the two models. (orig.)
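
    A minimal sketch of the kind of Monte Carlo resolvability study described above, for the linear-quadratic model S(D) = exp(-(aD + bD^2)); the true parameter values, dose grid and error level are invented, and the repair-misrepair model is not included.

      # Monte Carlo check of how well linear-quadratic parameters are resolved from noisy data.
      import numpy as np
      from scipy.optimize import curve_fit

      def lq_survival(dose, a, b):
          return np.exp(-(a * dose + b * dose ** 2))

      a_true, b_true = 0.2, 0.03                  # hypothetical values [Gy^-1, Gy^-2]
      doses = np.linspace(0, 8, 9)                # dose points in the sampled range [Gy]
      rel_error = 0.1                             # 10% multiplicative experimental error

      rng = np.random.default_rng(2)
      estimates = []
      for _ in range(1000):                       # Monte Carlo replicates
          s_obs = lq_survival(doses, a_true, b_true) * (1 + rel_error * rng.normal(size=doses.size))
          popt, _ = curve_fit(lq_survival, doses, np.clip(s_obs, 1e-6, None), p0=[0.1, 0.01])
          estimates.append(popt)

      estimates = np.array(estimates)
      print("alpha: mean %.3f, sd %.3f" % (estimates[:, 0].mean(), estimates[:, 0].std()))
      print("beta : mean %.3f, sd %.3f" % (estimates[:, 1].mean(), estimates[:, 1].std()))
      print("alpha-beta correlation:", np.corrcoef(estimates.T)[0, 1])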

  6. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  7. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
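
    The record describes Bayesian updating of expert judgment with microbiological data. The sketch below shows the idea on a single, generic contamination-probability parameter with a conjugate Beta prior and binomial counts; the prior settings and counts are hypothetical, and this is not the actual processing-line model.

      # Conjugate Beta-Binomial updating of an expert-based prior with survey data.
      from scipy import stats

      # Expert judgment encoded as a Beta prior (placeholder values)
      alpha_prior, beta_prior = 2.0, 8.0          # prior mean 0.2

      # Hypothetical microbiological survey: positives out of sampled carcasses
      positives, n_samples = 14, 40

      # Conjugate update: the posterior is again a Beta distribution
      alpha_post = alpha_prior + positives
      beta_post = beta_prior + n_samples - positives
      posterior = stats.beta(alpha_post, beta_post)

      print("prior mean      :", alpha_prior / (alpha_prior + beta_prior))
      print("posterior mean  :", posterior.mean())
      print("95% credible int:", posterior.interval(0.95))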

  8. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    efficient model that can be applied in aero-elastic codes for fast evaluation of the dynamic structural response of wind turbines. The target solutions, utilised for calibration of the lumped-parameter models, are obtained by a coupled finite-element/boundaryelement scheme in the frequency domain......, and the quality of the models are tested in the time and frequency domains. It is found that precise results are achieved by lumped-parameter models with two to four internal degrees of freedom per displacement or rotation of the foundation. Further, coupling between the horizontal sliding and rocking cannot...

  9. Lumped-Parameter Models for Windturbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars

    The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computationalmodel significantly. This may be obtained by the fitting...... of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  10. MODELING OF FUEL SPRAY CHARACTERISTICS AND DIESEL COMBUSTION CHAMBER PARAMETERS

    Directory of Open Access Journals (Sweden)

    G. M. Kukharonak

    2011-01-01

    Full Text Available The computer model for coordination of fuel spray characteristics with diesel combustion chamber parameters has been created in the paper. The model allows one to observe fuel spray development in the diesel cylinder at any moment of injection, to calculate characteristics of the fuel sprays with due account of the shape and dimensions of the combustion chamber, and to change fuel injection characteristics, supercharging parameters, and the shape and dimensions of the combustion chamber in a timely manner. Moreover, the computer model permits determination of the parameters of the holes in an injector nozzle that provide the required fuel spray characteristics at the stage of designing a diesel engine. Combustion chamber parameters for the 4ЧН11/12.5 diesel engine have been determined in the paper.

  11. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.

  12. Seasonal and spatial variation in broadleaf forest model parameters

    Science.gov (United States)

    Groenendijk, M.; van der Molen, M. K.; Dolman, A. J.

    2009-04-01

    Process-based, coupled ecosystem carbon, energy and water cycle models are used with the ultimate goal of projecting the effect of future climate change on the terrestrial carbon cycle. A typical dilemma in such exercises is how much detail the model must be given to describe the observations reasonably realistically while remaining general. We use a simple vegetation model (5PM) with five model parameters to study the variability of the parameters. These parameters are derived from the observed carbon and water fluxes of the FLUXNET database. For 15 broadleaf forests the model parameters were derived for different time resolutions. It appears that, in general for all forests, the correlation coefficient between observed and simulated carbon and water fluxes improves with a higher parameter time resolution. The quality of the simulations is thus always better when a higher time resolution is used. These results show that annual parameters are not capable of properly describing weather effects on ecosystem fluxes, and that a two-day time resolution yields the best results. A first indication of the climate constraints can be found in the seasonal variation of the covariance between Jm, which describes the maximum electron transport for photosynthesis, and climate variables. A general seasonal pattern we found is that during winter the covariance with all climate variables is zero. Jm increases rapidly after initial spring warming, resulting in a large covariance with air temperature and global radiation. During summer Jm is less variable, but co-varies negatively with air temperature and vapour pressure deficit and positively with soil water content. A temperature response appears during spring and autumn for broadleaf forests. This shows that an annual model parameter cannot be representative for the entire year, and relations with mean annual temperature are not possible. During summer the photosynthesis parameters are constrained by water availability, soil water content and

  13. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.

  14. Large signal S-parameters: modeling and radiation effects in microwave power transistors

    International Nuclear Information System (INIS)

    Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.

    1973-01-01

    Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)

  15. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
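
    A minimal sketch, on synthetic person-time data, of the kind of workflow described above: fit a Poisson rate model, inspect a simple dispersion statistic, and refit with a negative binomial family. This is not the authors' implementation; it uses statsmodels, and the covariates, counts and cut-off are made up.

      # Overdispersion check for a Poisson rate model and negative binomial refit (synthetic data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 500
      age = rng.uniform(40, 80, n)
      person_time = rng.uniform(0.5, 5.0, n)                 # years at risk
      mu = np.exp(-6.0 + 0.05 * age) * person_time
      deaths = rng.negative_binomial(n=2, p=2 / (2 + mu))    # deliberately overdispersed counts

      X = sm.add_constant(age)
      offset = np.log(person_time)

      poisson_fit = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=offset).fit()
      dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
      print("Poisson dispersion statistic:", round(dispersion, 2))   # values well above 1 suggest overdispersion

      if dispersion > 1.5:   # arbitrary cut-off for this illustration
          nb_fit = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=0.5),
                          offset=offset).fit()
          print(nb_fit.summary())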

  16. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  17. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  18. Adjustment modes in the trajectory of progressive multiple sclerosis: a qualitative study and conceptual model.

    Science.gov (United States)

    Bogosian, Angeliki; Morgan, Myfanwy; Bishop, Felicity L; Day, Fern; Moss-Morris, Rona

    2017-03-01

    We examined cognitive and behavioural challenges and adaptations for people with progressive multiple sclerosis (MS) and developed a preliminary conceptual model of changes in adjustment over time. Using theoretical sampling, 34 semi-structured interviews were conducted with people with MS. Participants were between 41 and 77 years of age. Thirteen were diagnosed with primary progressive MS and 21 with secondary progressive MS. Data were analysed using a grounded theory approach. Participants described initially bracketing the illness off and carrying on their usual activities, but this became problematic as the condition progressed and they employed different adjustment modes to cope with increased disabilities. Some scaled back their activities to live a more comfortable life, others identified new activities or adapted old ones, whereas at times people disengaged from the adjustment process altogether and resigned themselves to their condition. Relationships with partners, emotional reactions, the environment and perception of the environment influenced adjustment, while people were often flexible and shifted among modes. Adjusting to a progressive condition is a fluid process. Future interventions can be tailored to address modifiable factors at different stages of the condition and may involve addressing emotional reactions, concealing/revealing the condition, and perceptions of the environment.

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  1. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  2. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  3. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Stelian Stancu

    2008-06-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained at the level of the economy from the simulations are presented: the ratio of capital to output volume, output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.
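
    For orientation, the sketch below simulates capital per effective worker in a standard Solow-type model with a Cobb-Douglas technology; the parameter values are purely illustrative and do not reproduce the paper's adjusted model or its scenario for the Romanian economy.

      # Illustrative Solow-type growth dynamics in intensive (per effective worker) form.
      import numpy as np

      s, delta = 0.25, 0.05        # savings rate, depreciation rate (made up)
      n_pop, g = 0.01, 0.02        # population growth, labour-efficiency growth (made up)
      alpha = 0.33                 # capital share in a Cobb-Douglas technology

      def simulate(k0=1.0, years=100):
          k = np.empty(years)
          k[0] = k0
          for t in range(1, years):
              y = k[t - 1] ** alpha                               # output per effective worker
              k[t] = k[t - 1] + s * y - (n_pop + g + delta) * k[t - 1]
          return k

      k_path = simulate()
      k_star = (s / (n_pop + g + delta)) ** (1 / (1 - alpha))     # steady state
      print("capital/output ratio in steady state:", k_star ** (1 - alpha))
      print("simulated final k vs steady state   :", k_path[-1], k_star)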

  4. Regionalising Parameters of a Conceptual Rainfall-Runoff Model for ...

    African Journals Online (AJOL)

    IHACRES, a lumped conceptual rainfall-runoff model, was calibrated to six catchments ranging in size from 49km2 to 600 km2 within the upper Tana River basin to obtain a set of model parameters that characterise the hydrological behaviour within the region. Physical catchment attributes indexing topography, soil and ...

  5. Constraint on Parameters of Inverse Compton Scattering Model for ...

    Indian Academy of Sciences (India)

    B2319+60, two parameters of inverse Compton scattering model, the initial Lorentz factor and the factor of energy loss of relativistic particles are constrained. Key words. Pulsar—inverse Compton scattering—emission mechanism. 1. Introduction. Among various kinds of models for pulsar radio emission, the inverse ...

  6. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of

  7. Rain storm models and the relationship between their parameters

    NARCIS (Netherlands)

    Stol, P.T.

    1977-01-01

    Rainfall interstation correlation functions can be obtained with the aid of analytic rainfall or storm models. Since alternative storm models have different mathematical formulas, comparison should be based on equality of parameters like storm diameter, mean rainfall amount, storm maximum or total

  8. Parameter-less remote real-time control for the adjustment of pressure in water distribution systems

    CSIR Research Space (South Africa)

    Page, Philip R

    2017-09-01

    Full Text Available Published in the Journal of Water Resources Planning and Management, 2017, 143(9): 04017050. Link: http://ascelibrary.org/doi/full/10.1061/(ASCE)WR.1943-5452.0000805. Abstract excerpt: Reducing... control parameter values; hence the method can match onto a real-world WDS. There has been recent work...

  9. Lumped-parameters equivalent circuit for condenser microphones modeling.

    Science.gov (United States)

    Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar

    2017-10-01

    This work presents a lumped parameters equivalent model of condenser microphone based on analogies between acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air gap modeling and to the use of proper boundary condition. Corresponding lumped-parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows a fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.

  10. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other software does not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
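
    A minimal sketch of the integration-based least-squares idea (not the PARES software itself): the kinetic parameter of a first-order decay model is estimated by repeatedly integrating the ODE and minimizing the misfit to synthetic measurements with scipy.

      # Integration-based least-squares estimation of a kinetic parameter (synthetic data).
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      t_obs = np.linspace(0, 10, 11)
      k_true, c0 = 0.35, 5.0
      rng = np.random.default_rng(4)
      c_obs = c0 * np.exp(-k_true * t_obs) + rng.normal(0, 0.05, t_obs.size)   # synthetic data

      def residuals(params):
          k = params[0]
          # Integrate dC/dt = -k*C and compare with the observations
          sol = solve_ivp(lambda t, c: -k * c, (t_obs[0], t_obs[-1]), [c0],
                          t_eval=t_obs, rtol=1e-8)
          return sol.y[0] - c_obs

      fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
      print("estimated k:", fit.x[0], "(true value:", k_true, ")")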

  11. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al{sub 2}O{sub 3}) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  12. Determination of appropriate models and parameters for premixing calculations

    International Nuclear Information System (INIS)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-01

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al 2 O 3 ) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested

  13. Condition Parameter Modeling for Anomaly Detection in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Yonglong Yan

    2014-05-01

    Full Text Available Data collected from the supervisory control and data acquisition (SCADA) system, used widely in wind farms to obtain operational and condition information about wind turbines (WTs), is of great significance for anomaly detection in wind turbines. The paper presents a novel model for wind turbine anomaly detection mainly based on SCADA data and a back-propagation neural network (BPNN) for automatic selection of the condition parameters. The SCADA data sets are determined through analysis of the cumulative probability distribution of wind speed and the relationship between output power and wind speed. The automatic BPNN-based parameter selection reduces redundant parameters for anomaly detection in wind turbines. Through investigation of cases of WT faults, the validity of the automatic parameter selection-based model for WT anomaly detection is verified.

  14. Ground level enhancement (GLE) energy spectrum parameters model

    Science.gov (United States)

    Qin, G.; Wu, S.

    2017-12-01

    We study the ground level enhancement (GLE) events in solar cycle 23 using the four energy spectrum parameters, the normalization parameter C, the low-energy power-law slope γ1, the high-energy power-law slope γ2, and the break energy E0, obtained by Mewaldt et al. (2012), who fit the observations to a double power-law equation. We divide the GLEs into two groups, one with strong acceleration by interplanetary (IP) shocks and another without strong acceleration, according to the conditions of the solar eruptions. We then fit the four parameters to solar event conditions to obtain models of the parameters for the two groups of GLEs separately, so as to establish a model of the GLE energy spectrum for future space weather prediction.
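
    As a worked illustration of the four parameters, the sketch below evaluates a generic broken power law with normalization C, slopes γ1 and γ2 and break energy E0. This simple form is an assumption for illustration and is not necessarily the exact double power-law expression used by Mewaldt et al. (2012).

      # Generic broken (double) power-law spectrum, continuous at the break energy E0.
      import numpy as np

      def double_power_law(E, C, gamma1, gamma2, E0):
          E = np.asarray(E, dtype=float)
          low = C * E ** (-gamma1)
          high = C * E0 ** (gamma2 - gamma1) * E ** (-gamma2)   # matches the low branch at E = E0
          return np.where(E <= E0, low, high)

      energies = np.logspace(0, 3, 7)            # MeV, illustrative grid
      print(double_power_law(energies, C=1e6, gamma1=1.2, gamma2=3.5, E0=30.0))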

  15. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  16. Variational estimation of process parameters in a simplified atmospheric general circulation model

    Science.gov (United States)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  17. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    OpenAIRE

    Zhang, Xiangsheng; Pan, Feng

    2015-01-01

    Aimed at the parameters optimization in support vector machine (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm which has better global searching ability. The algorithm includes detecting and handling the local convergence and exhibits strong ability to avoid being trapped in local minima. The material step of the method was shown. Simulation experiments demonstrate the effective...

  18. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    Directory of Open Access Journals (Sweden)

    Xiangsheng Zhang

    2015-01-01

    Full Text Available Aimed at parameter optimization in the support vector machine (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm which has better global searching ability. The algorithm includes detecting and handling local convergence and exhibits a strong ability to avoid being trapped in local minima. The essential steps of the method are shown. Simulation experiments demonstrate the effectiveness of the proposed algorithm.

  19. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
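
    A minimal sketch of the general framework on synthetic data: an SIR model is integrated with scipy and its transmission and recovery rates are sampled with a plain random-walk Metropolis algorithm under flat priors. The priors, noise level and data are all made up, and the original framework is considerably more general.

      # Random-walk Metropolis fit of SIR parameters to synthetic incidence-like data.
      import numpy as np
      from scipy.integrate import solve_ivp

      def sir(t, y, beta, gamma):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      def infected_curve(beta, gamma, t):
          sol = solve_ivp(sir, (t[0], t[-1]), [0.99, 0.01, 0.0],
                          args=(beta, gamma), t_eval=t, rtol=1e-6)
          return sol.y[1]

      t = np.arange(0, 60, 1.0)
      rng = np.random.default_rng(5)
      data = infected_curve(1.5, 0.5, t) + rng.normal(0, 0.005, t.size)   # synthetic observations

      def log_post(theta):
          beta, gamma = theta
          if beta <= 0 or gamma <= 0 or beta > 5 or gamma > 5:            # flat priors on (0, 5]
              return -np.inf
          resid = data - infected_curve(beta, gamma, t)
          return -0.5 * np.sum((resid / 0.005) ** 2)                      # Gaussian likelihood

      theta = np.array([1.0, 0.3])
      lp = log_post(theta)
      samples = []
      for _ in range(3000):                         # short chain, illustration only
          prop = theta + rng.normal(0, 0.02, 2)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())

      samples = np.array(samples[1000:])            # discard burn-in
      print("posterior means (beta, gamma):", samples.mean(axis=0))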

  20. A lumped parameter, low dimension model of heat exchanger

    International Nuclear Information System (INIS)

    Kanoh, Hideaki; Furushoo, Junji; Masubuchi, Masami

    1980-01-01

    This paper reports on the results of an investigation of the distributed parameter model, the difference model, and the method-of-weighted-residuals model for heat exchangers. By the method of weighted residuals (MWR), the opposite flow heat exchanger system is approximated by a low-dimension, lumped-parameter model. By assuming constant specific heat, constant density, the same form of tube cross-section, the same form of the heat exchange surface, uniform flow velocity, a linear relation of heat transfer to flow velocity, a liquid heat carrier, and thermal insulation of the liquid from the outside, fundamental equations are obtained. The experimental apparatus was made of acrylic resin. The response of the temperature at the exit of the first liquid to a variation in the flow rate of the second liquid was measured and compared with the models. The MWR model shows good approximation in the low frequency region, and as the number of divisions increases, good approximation spreads to the higher frequency region. (Kato, T.)

  1. Dynamic optimization of a biped model: Energetic walking gaits with different mechanical and gait parameters

    Directory of Open Access Journals (Sweden)

    Kang An

    2015-05-01

    Full Text Available Energy consumption is one of the main problems in bipedal robot walking. For the purpose of studying the effects of parameters on the design of energetic walking bipeds with strong adaptability, we use a dynamic optimization method on our new walking model to first investigate the effects of the mechanical parameters, including mass and length distribution, on walking efficiency. Then, we study the energetic walking gait features for combinations of walking speed and step length. Our walking model is designed upon Srinivasan's model. Dynamic optimization is used for a free search with minimal constraints. The results show that the cost of transport of a certain gait increases with increasing mass and length distribution parameters, except that the cost of transport decreases for a large length distribution parameter combined with a long step length. We also find a corresponding range of walking speed and step length in which a variation in one of the two parameters has no obvious effect on the cost of transport. With fixed mechanical parameters, the cost of transport increases with increasing walking speed. There is a speed-step length relationship for walking with minimal cost of transport. The hip torque output strategy is adjusted in two situations to meet the walking requirements.

  2. Reservoir theory, groundwater transit time distributions, and lumped parameter models

    International Nuclear Information System (INIS)

    Etcheverry, D.; Perrochet, P.

    1999-01-01

    The relation between groundwater residence times and transit times is given by the reservoir theory. It allows theoretical transit time distributions to be calculated in a deterministic way, analytically or on numerical models. Two analytical solutions validate the piston flow and the exponential model for simple conceptual flow systems. A numerical solution of a hypothetical regional groundwater flow shows that lumped parameter models could be applied in some cases to large-scale, heterogeneous aquifers. (author)

  3. Parameters identification of photovoltaic models using an improved JAYA optimization algorithm

    International Nuclear Information System (INIS)

    Yu, Kunjie; Liang, J.J.; Qu, B.Y.; Chen, Xu; Wang, Heshan

    2017-01-01

    Highlights: • IJAYA algorithm is proposed to identify the PV model parameters efficiently. • A self-adaptive weight is introduced to purposefully adjust the search process. • Experience-based learning strategy is developed to enhance the population diversity. • Chaotic learning method is proposed to refine the quality of the best solution. • IJAYA features superior performance in identifying parameters of PV models. - Abstract: Parameter identification of photovoltaic (PV) models based on measured current-voltage characteristic curves is significant for the simulation, evaluation, and control of PV systems. To accurately and reliably identify the parameters of different PV models, an improved JAYA (IJAYA) optimization algorithm is proposed in this paper. In IJAYA, a self-adaptive weight is introduced to adjust the tendency of approaching the best solution and avoiding the worst solution at different search stages, which enables the algorithm to approach the promising area at the early stage and implement the local search at the later stage. Furthermore, an experience-based learning strategy is developed and employed randomly to maintain the population diversity and enhance the exploration ability. A chaotic elite learning method is proposed to refine the quality of the best solution in each generation. The proposed IJAYA is used to solve the parameter identification problems of different PV models, i.e., single diode, double diode, and PV module. Comprehensive experimental results and analyses indicate that IJAYA can obtain a highly competitive performance compared with other state-of-the-art algorithms, especially in terms of accuracy and reliability.
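
    For orientation, the sketch below implements only the basic JAYA update rule on which IJAYA builds (moving towards the current best solution and away from the worst one); the self-adaptive weight, experience-based learning and chaotic elite learning of the paper are not reproduced, and the objective here is a toy sphere function rather than the PV parameter-identification problem.

      # Basic JAYA update rule on a toy objective (not the full IJAYA algorithm).
      import numpy as np

      def jaya(objective, bounds, pop_size=20, iters=200, seed=6):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
          cost = np.apply_along_axis(objective, 1, pop)
          for _ in range(iters):
              best, worst = pop[np.argmin(cost)], pop[np.argmax(cost)]
              r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
              # Move towards the best solution and away from the worst one
              trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
              trial = np.clip(trial, lo, hi)
              trial_cost = np.apply_along_axis(objective, 1, trial)
              improved = trial_cost < cost
              pop[improved], cost[improved] = trial[improved], trial_cost[improved]
          return pop[np.argmin(cost)], cost.min()

      best_x, best_f = jaya(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 3)
      print("best solution:", best_x, "objective:", best_f)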

  4. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for

  5. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that each of the parameters ambient temperature, effective gas diffusivity, and evaporation rate constant has a significant effect on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% change in value, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  6. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model; a hybrid algorithm was used for the minimization. The paper also describes the filter system used for filtering the noisy measurement waveforms and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
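
    The estimation idea, minimizing the mean square deviation between measured and simulated waveforms, can be sketched with SciPy's least-squares solver. The damped-oscillation stand-in below is a hypothetical simplification of the nonlinear generator model; only the fitting pattern reflects the abstract.

        import numpy as np
        from scipy.optimize import least_squares

        def model_response(params, t):
            # Hypothetical stand-in for the generator model: a damped oscillation with
            # amplitude, damping and angular frequency as the unknown parameters.
            amp, zeta, omega = params
            return amp * np.exp(-zeta * t) * np.cos(omega * t)

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 500)
        measured = model_response((1.2, 0.8, 6.0), t) + rng.normal(0.0, 0.05, t.size)  # noisy waveform

        def residuals(params):
            # Deviations between measured and simulated waveforms; their squared mean is the objective.
            return model_response(params, t) - measured

        fit = least_squares(residuals, x0=[1.0, 0.5, 5.0])
        print("estimated parameters:", fit.x)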

  7. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
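
    A small numerical illustration of elasticity and first-order sensitivity prediction, using the generalized EUD named in the abstract; the voxel doses and the volume-effect parameter a are made-up values, and central finite differences stand in for the analytical derivatives used in the paper.

        import numpy as np

        def geud(dose, a):
            # Generalized equivalent uniform dose of a voxel dose distribution for parameter a.
            return np.mean(dose**a)**(1.0 / a)

        def elasticity(f, p, eps=1e-4):
            # Relative change of f per relative change of p, from a central finite difference.
            return (f(p + eps) - f(p - eps)) / (2.0 * eps) * p / f(p)

        rng = np.random.default_rng(1)
        dose = rng.uniform(55.0, 65.0, 1000)   # hypothetical voxel doses in Gy
        a = 8.0                                # hypothetical volume-effect parameter

        e = elasticity(lambda p: geud(dose, p), a)
        print("gEUD:", geud(dose, a), "elasticity with respect to a:", e)

        # First-order sensitivity prediction at a new parameter value, without recomputing the plan.
        a_new = 10.0
        predicted = geud(dose, a) * (1.0 + e * (a_new - a) / a)
        print("predicted gEUD at a = 10:", predicted, "actual:", geud(dose, a_new))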

  8. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study has examined incorporating value indicators and data on basic manpower expended, time spent, technical difficulty, and the degree of risk in the latest standards for the price of medical procedures in China, and this study also offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that prices of TCM procedures were mainly based on the value of medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low and the current price accounted for 56% of the standardized value of a procedure, on average. Current price levels accounted for a markedly lower standardized value of acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized (p based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and other countries and regions that mainly have fee-for-service (FFS) medical care.

  9. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    The paper presents a compact model for cyclic plasticity based on energy in terms of external and internal variables, and plastic yielding described by kinematic hardening and a flow potential with an additive term controlling the nonlinear cyclic hardening. The model is basically described by five...... parameters: external and internal stiffness, a yield stress and a limiting ultimate stress, and finally a parameter controlling the gradual development of plastic deformation. Calibration against numerous experimental results indicates that typically larger plastic strains develop than predicted...

  10. Climate change decision-making: Model & parameter uncertainties explored

    Energy Technology Data Exchange (ETDEWEB)

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is uncertainties in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serve to inform decision makers about the likely outcome of policy initiatives, and help set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change have been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change and impacts from these changes and policies for emissions mitigation, and adaptation to change. The model has over 800 objects of which about one half are used to represent uncertainty. In this paper we show, that when considering parameter uncertainties, the relative contribution of climatic uncertainties are most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by model structure choice, rather than parameter uncertainties.

  11. On the effect of model parameters on forecast objects

    Science.gov (United States)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.

  12. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    Science.gov (United States)

    2012-11-01

    Only fragments of the report abstract are preserved: parametric models may be used to construct spatially varying wind fields for the GOM region (e.g., Thompson and Cardone [12]), but this requires using a more complicated approach; the acknowledgements cite the Storm Damage Reduction and Dredging Operations and Environmental Research (DOER) programs, and USACE Headquarters granted permission to publish this paper.
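
    For context, the classic (modified) Rankine vortex profile that the report adjusts combines solid-body rotation inside the radius of maximum winds with a power-law decay outside it; the sketch below uses this textbook form with made-up hurricane values, not the report's adjusted parameters.

        import numpy as np

        def rankine_wind_speed(r, v_max, r_max, b=1.0):
            # Classic (modified) Rankine vortex: solid-body rotation inside the radius of
            # maximum winds r_max, power-law decay with exponent b outside it.
            r = np.asarray(r, dtype=float)
            inside = v_max * r / r_max
            outside = v_max * (r_max / np.maximum(r, 1e-9))**b
            return np.where(r <= r_max, inside, outside)

        radii_km = np.array([10.0, 30.0, 60.0, 120.0])
        print(rankine_wind_speed(radii_km, v_max=50.0, r_max=30.0, b=0.6))  # m/s, hypothetical storm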

  13. Risk-adjusted Outcomes of Clinically Relevant Pancreatic Fistula Following Pancreatoduodenectomy: A Model for Performance Evaluation.

    Science.gov (United States)

    McMillan, Matthew T; Soi, Sameer; Asbun, Horacio J; Ball, Chad G; Bassi, Claudio; Beane, Joal D; Behrman, Stephen W; Berger, Adam C; Bloomston, Mark; Callery, Mark P; Christein, John D; Dixon, Elijah; Drebin, Jeffrey A; Castillo, Carlos Fernandez-Del; Fisher, William E; Fong, Zhi Ven; House, Michael G; Hughes, Steven J; Kent, Tara S; Kunstman, John W; Malleo, Giuseppe; Miller, Benjamin C; Salem, Ronald R; Soares, Kevin; Valero, Vicente; Wolfgang, Christopher L; Vollmer, Charles M

    2016-08-01

    To evaluate surgical performance in pancreatoduodenectomy using clinically relevant postoperative pancreatic fistula (CR-POPF) occurrence as a quality indicator. Accurate assessment of surgeon and institutional performance requires (1) standardized definitions for the outcome of interest and (2) a comprehensive risk-adjustment process to control for differences in patient risk. This multinational, retrospective study of 4301 pancreatoduodenectomies involved 55 surgeons at 15 institutions. Risk for CR-POPF was assessed using the previously validated Fistula Risk Score, and pancreatic fistulas were stratified by International Study Group criteria. CR-POPF variability was evaluated and hierarchical regression analysis assessed individual surgeon and institutional performance. There was considerable variability in both CR-POPF risk and occurrence. Factors increasing the risk for CR-POPF development included an increasing Fistula Risk Score (odds ratio 1.49 per point) and at least one further covariate (odds ratio 3.30). Performance outliers were identified at the surgeon and institutional levels. Of the top 10 surgeons (≥15 cases) for non-risk-adjusted performance, only 6 remained in this high-performing category following risk adjustment. This analysis of pancreatic fistulas following pancreatoduodenectomy demonstrates considerable variability in both the risk and occurrence of CR-POPF among surgeons and institutions. Disparities in patient risk between providers reinforce the need for comprehensive, risk-adjusted modeling when assessing performance based on procedure-specific complications. Furthermore, beyond inherent patient risk factors, surgical decision-making influences fistula outcomes.

  14. Economic analysis of coal price-electricity price adjustment in China based on the CGE model

    International Nuclear Information System (INIS)

    He, Y.X.; Zhang, S.L.; Yang, L.Y.; Wang, Y.J.; Wang, J.

    2010-01-01

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before any electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase raises the costs of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development, and electricity price policy making must therefore consider all factors to minimize the adverse influence.

  15. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems has become feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, its wide use has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Realistic models therefore involve the evolution in time (and space), which leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
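
    A minimal example of nonlinear parameter identification of the kind discussed, assuming a simple first-order degradation law and hypothetical residue data; the paper's models are more general differential equation systems.

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(t, c0, k):
            # Simplest nonlinear degradation model C(t) = C0 * exp(-k * t).
            return c0 * np.exp(-k * t)

        days = np.array([0.0, 3.0, 7.0, 14.0, 28.0, 56.0])
        conc = np.array([10.2, 8.1, 6.3, 4.0, 1.7, 0.4])   # hypothetical residues in mg/kg

        (c0_hat, k_hat), _ = curve_fit(first_order, days, conc, p0=[10.0, 0.1])
        print("C0 =", c0_hat, "k =", k_hat, "half-life =", np.log(2.0) / k_hat, "days")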

  16. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
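
    The local-versus-global comparison can be illustrated with SciPy's built-in optimizers on a standard multimodal test function; Nelder-Mead plays the role of the local method, and differential evolution stands in for the global CMA-ES strategy (an assumption for illustration, not the paper's code).

        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        def rastrigin(x):
            # Multimodal test objective standing in for a thermodynamic transcription model fit.
            x = np.asarray(x)
            return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

        bounds = [(-5.12, 5.12)] * 4

        local_fit = minimize(rastrigin, x0=np.full(4, 3.0), method="Nelder-Mead")   # local search
        global_fit = differential_evolution(rastrigin, bounds, seed=0)              # global search

        print("Nelder-Mead objective:", local_fit.fun)
        print("differential evolution objective:", global_fit.fun)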

  17. Conceptual Model for Simulating the Adjustments of Bankfull Characteristics in the Lower Yellow River, China

    Directory of Open Access Journals (Sweden)

    Yuanjian Wang

    2014-01-01

    Full Text Available We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic equations for mass conservation, friction, and sediment transport capacity, together with the Exner equation, were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the rates of change in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. On inputting the daily flow discharge and sediment load, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. The proposed model allows multiple dependent variables and the input of daily hydrological data for long-term simulations. It links the hydrodynamic and geomorphic processes in a fluvial river and has potential applicability to fluvial rivers undergoing significant adjustments.

  18. Risk-adjusted performance evaluation in three academic thoracic surgery units using the Eurolung risk models.

    Science.gov (United States)

    Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro

    2018-01-03

    The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  1. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational

  2. Iterative integral parameter identification of a respiratory mechanics model.

    Science.gov (United States)

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
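
    A simplified, non-iterative version of the integral identification idea for a linear single-compartment respiratory model (Paw = E*V + R*Q + P0): integrating the equation of motion makes the parameters appear linearly in integrated, noise-smoothed signals, which can then be solved by ordinary least squares. The second-order model and the iteration described in the paper are omitted, and all signals below are synthetic.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        # Synthetic data for the single-compartment equation of motion Paw = E*V + R*Q + P0.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 3.0, 300)
        flow = np.where(t < 1.0, 0.5, -0.25)                  # L/s, hypothetical flow profile
        volume = cumulative_trapezoid(flow, t, initial=0.0)   # L
        E_true, R_true, P0_true = 25.0, 10.0, 5.0             # cmH2O/L, cmH2O*s/L, cmH2O
        paw = E_true * volume + R_true * flow + P0_true + rng.normal(0.0, 0.2, t.size)

        # Integrating the equation gives  int(Paw) = E*int(V) + R*(V - V0) + P0*t,
        # which is linear in (E, R, P0) and works on smoothed integrals instead of noisy derivatives.
        design = np.column_stack([cumulative_trapezoid(volume, t, initial=0.0), volume - volume[0], t])
        target = cumulative_trapezoid(paw, t, initial=0.0)
        estimates, *_ = np.linalg.lstsq(design, target, rcond=None)
        print("estimated E, R, P0:", estimates)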

  3. Iterative integral parameter identification of a respiratory mechanics model

    Directory of Open Access Journals (Sweden)

    Schranz Christoph

    2012-07-01

    Full Text Available Abstract Background Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual’s model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS patients. Results The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. Conclusion These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  4. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, together with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
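
    A short numerical check of the issue the abstract raises, assuming a hypothetical fitted logistic model: the response evaluated at the mean covariate differs from the mean response over the covariate distribution because the inverse link is nonlinear.

        import numpy as np

        def logistic(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Hypothetical fitted logistic model: logit(p) = b0 + b1 * covariate.
        b0, b1 = -2.0, 0.08
        rng = np.random.default_rng(3)
        covariate = rng.normal(50.0, 15.0, 10000)   # e.g. a baseline covariate in one treatment group

        print("response at the mean covariate:", logistic(b0 + b1 * covariate.mean()))
        print("mean response over the group:  ", logistic(b0 + b1 * covariate).mean())
        # The two differ because the inverse link is nonlinear, which is the bias discussed above.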

  5. MODELLING BIOPHYSICAL PARAMETERS OF MAIZE USING LANDSAT 8 TIME SERIES

    Directory of Open Access Journals (Sweden)

    T. Dahms

    2016-06-01

    Full Text Available Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season with very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing
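
    A compact sketch of the modelling step with scikit-learn's random forest regressor on synthetic data; the band values, the NDVI-driven LAI response, and the forest settings are invented for illustration and are not the study's data or tuning.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Invented training set: rows are field observations, columns are six reflectance
        # bands plus a vegetation index; the target is in-situ LAI.
        rng = np.random.default_rng(4)
        bands = rng.uniform(0.0, 0.6, size=(200, 6))
        ndvi = (bands[:, 4] - bands[:, 3]) / (bands[:, 4] + bands[:, 3] + 1e-9)
        X = np.column_stack([bands, ndvi])
        lai = 3.0 + 4.0 * ndvi + rng.normal(0.0, 0.3, 200)   # synthetic LAI response

        forest = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
        forest.fit(X, lai)
        print("out-of-bag R^2:", forest.oob_score_)
        print("variable importances:", forest.feature_importances_)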

  6. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    Science.gov (United States)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season with very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model

  7. Adaptive control of Parkinson's state based on a nonlinear computational model with unknown parameters.

    Science.gov (United States)

    Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan

    2015-02-01

    The objective here is to explore the use of adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of adaptive control approach that could be used to regulate DBS waveform in more effective treatment of PD.

  8. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  9. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...... PA for simulations. The simulated error vector magnitude (EVM) and adjacent channel power ratio (ACPR) were compared with the measured data to validate the model. The maximum differences between the simulated and measured EVM and ACPR are less than 2% point and 3 dB, respectively....

  10. Identifiability and error minimization of receptor model parameters with PET

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs

  11. Modelling of nonhomogeneous atmosphere in NPP containment using lumped-parameter model based on CFD calculations

    International Nuclear Information System (INIS)

    Kljenak, I.; Mavko, B.; Babic, M.

    2005-01-01

    Full text of publication follows: The modelling and simulation of atmosphere mixing and stratification in nuclear power plant containments is a topic, which is currently being intensely investigated. With the increase of computer power, it has now become possible to model these phenomena with a local instantaneous description, using so-called Computational Fluid Dynamics (CFD) codes. However, calculations with these codes still take relatively long times. An alternative faster approach, which is also being applied, is to model nonhomogeneous atmosphere with lumped-parameter codes by dividing larger control volumes into smaller volumes, in which conditions are modelled as homogeneous. The flow between smaller volumes is modelled using one-dimensional approaches, which includes the prescription of flow loss coefficients. However, some authors have questioned this approach, as it appears that atmosphere stratification may sometimes be well simulated only by adjusting flow loss coefficients to adequate 'artificial' values that are case-dependent. To start the resolution of this issue, a modelling of nonhomogeneous atmosphere with a lumped-parameter code is proposed, where the subdivision of a large volume into smaller volumes is based on results of CFD simulations. The basic idea is to use the results of a CFD simulation to define regions, in which the flow velocities have roughly the same direction. These regions are then modelled as control volumes in a lumped-parameter model. In the proposed work, this procedure was applied to a simulation of an experiment of atmosphere mixing and stratification, which was performed in the TOSQAN facility. The facility is located at the Institut de Radioprotection et de Surete Nucleaire (IRSN) in Saclay (France) and consists of a cylindrical vessel (volume: 7 m3), in which gases are injected. In the experiment, which was also proposed for the OECD/NEA International Standard Problem No.47, air was initially present in the vessel, and

  12. Prediction of interest rate using CKLS model with stochastic parameters

    International Nuclear Information System (INIS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-01-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points where j≤j′≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j−m), φ(j−m+1), …, φ(j−1) and the interest rate r(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r(j+n+1) of the interest rate at the next time point when the value r(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r(j+n+d) at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
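
    For reference, the CKLS dynamics dr = (alpha + beta*r) dt + sigma*r^gamma dW can be simulated with a simple Euler-Maruyama scheme; the sketch below uses fixed, made-up parameter values, whereas the paper lets the parameter vector itself evolve stochastically.

        import numpy as np

        def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n_steps, rng):
            # Euler-Maruyama discretisation of dr = (alpha + beta*r) dt + sigma * r**gamma dW.
            r = np.empty(n_steps + 1)
            r[0] = r0
            for i in range(n_steps):
                dw = rng.normal(0.0, np.sqrt(dt))
                r[i + 1] = max(r[i] + (alpha + beta * r[i]) * dt + sigma * r[i]**gamma * dw, 1e-8)
            return r

        rng = np.random.default_rng(5)
        path = simulate_ckls(r0=0.03, alpha=0.006, beta=-0.2, sigma=0.1, gamma=0.5,
                             dt=1.0 / 252.0, n_steps=252, rng=rng)
        print("simulated one-year short-rate path, final value:", path[-1])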

  13. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large amount of solution points which possibly carries relevant information on the underlying model characteristics. A possible utilization of this information amounts to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model of literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way to stabilize first the most important parameters of the model and later those which influence little the model outputs. In this sense, besides estimating efficiently the parameters values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  14. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points where j≤j′≤j+n. To model the variation of φ(j), we assume that φ(j) depends on φ(j−m), φ(j−m+1), …, φ(j−1) and the interest rate r(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r(j+n+1) of the interest rate at the next time point when the value r(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r(j+n+d) at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.

  15. Mathematical models to predict rheological parameters of lateritic hydromixtures

    Directory of Open Access Journals (Sweden)

    Gabriel Hernández-Ramírez

    2017-10-01

    Full Text Available The objective of the present work was to establish mathematical models that allow the prediction of the rheological parameters of the lateritic pulp at solids concentrations from 35% to 48%, temperatures of the preheated hydromixture above 82 °C and a mineral number between 3 and 16. Four samples of lateritic pulp from different process locations were used in the study. The results allowed defining that the plastic properties of the lateritic pulp under the conditions of this study conform to the Herschel-Bulkley model for real plastics. In addition, they show that for current operating conditions, and even for new situations, UPD mathematical models have a greater ability to predict rheological parameters than least-squares mathematical models.
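
    A minimal fit of the Herschel-Bulkley law tau = tau0 + K*gamma_dot^n with SciPy, using invented rheometer readings; the UPD and least-squares models compared in the paper are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def herschel_bulkley(shear_rate, tau0, k, n):
            # Herschel-Bulkley law for real plastics: tau = tau0 + K * gamma_dot**n.
            return tau0 + k * shear_rate**n

        shear_rate = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])     # 1/s, invented rheometer data
        shear_stress = np.array([10.8, 12.5, 15.3, 21.9, 30.6, 44.7])    # Pa

        (tau0, k, n), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress, p0=[10.0, 1.0, 0.5])
        print("yield stress:", tau0, "consistency index:", k, "flow index:", n)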

  16. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  17. Comparisons of criteria in the assessment model parameter optimizations

    International Nuclear Information System (INIS)

    Liu Xinhe; Zhang Yongxing

    1993-01-01

    Three criteria (chi-square, relative chi-square and correlation coefficient) used in the model parameter optimization (MPO) process, which aims at a significant reduction of prediction uncertainties, were discussed and compared to each other with the aid of a well-controlled tracer experiment.

  18. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  19. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  20. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  1. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...... between horizontal sliding and rocking is discussed....

  2. Key processes and input parameters for environmental tritium models

    International Nuclear Information System (INIS)

    Bunnenberg, C.; Taschner, M.; Ogram, G.L.

    1994-01-01

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs

  3. Key processes and input parameters for environmental tritium models

    Energy Technology Data Exchange (ETDEWEB)

    Bunnenberg, C; Taschner, M [Niedersaechsisches Inst. fuer Radiooekologie, Hannover (Germany); Ogram, G L [Ontario Hydro, Toronto, ON (Canada)

    1994-12-31

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs.

  4. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  5. Integrating microbial diversity in soil carbon dynamic models parameters

    Science.gov (United States)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed during the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Many of them now consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods over the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamics models. Some interesting attempts can be found; they are dominated by the incorporation of several compartments for different groups of microbial biomass, defined in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic models in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity in a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land-use history. The soils were then incubated for 104 days, and labelled and non-labelled CO2 fluxes were measured at ten

  6. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  7. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  8. Evaluation of some infiltration models and hydraulic parameters

    International Nuclear Information System (INIS)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-01-01

    The evaluation of infiltration characteristics and some parameters of infiltration models, such as sorptivity and final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate final soil infiltration rate. The equality of final infiltration rate with saturated hydraulic conductivity (Ks) was also tested. Moreover, values of sorptivity estimated from the Philip model were compared to estimates from selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to observed infiltration data. Some parameters of the models and the coefficient of determination of the goodness of fit were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated from the selected models; the estimated final infiltration rate was therefore not equal to the laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.

  9. Electro-optical parameters of bond polarizability model for aluminosilicates.

    Science.gov (United States)

    Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam

    2006-04-06

    Electro-optical parameters (EOPs) of the bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared to those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference in the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability-related characteristics of relevant systems in the framework of the BPM.

  10. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation

  11. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  12. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  13. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    Science.gov (United States)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences to support decision-makers as they evaluate results at the watershed scale. Evaluating preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework for water resources decision makers to plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision makers.

  14. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  15. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulated and verified runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide direct reference values for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also demonstrated that the sensitivity analysis was practicable for parameter adjustment, showed the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
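
    As a concrete illustration of the perturbation method described above, the sketch below computes a one-at-a-time relative sensitivity index for a toy two-parameter "watershed" response in Python. The model function, the parameter names (CN, LS) and all values are placeholder assumptions for illustration, not the AnnAGNPS code itself.

      # Hedged sketch of one-at-a-time perturbation sensitivity analysis.
      import numpy as np

      def toy_model(params):
          # stand-in response: pretend runoff depends nonlinearly on CN and LS
          return params["CN"] ** 1.5 * 0.01 + params["LS"] * 2.0

      baseline = {"CN": 75.0, "LS": 1.2}
      y0 = toy_model(baseline)
      delta = 0.10  # +/-10% perturbation of each parameter in turn

      for name, value in baseline.items():
          perturbed = dict(baseline, **{name: value * (1.0 + delta)})
          y1 = toy_model(perturbed)
          s = ((y1 - y0) / y0) / delta   # relative sensitivity index (dY/Y)/(dX/X)
          print(f"{name}: sensitivity index = {s:.3f}")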

  16. Mass balance model parameter transferability on a tropical glacier

    Science.gov (United States)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers is of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, shortwave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of between 14 and 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
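
    The calibration strategy described above (1000 random draws within physically meaningful ranges, evaluated against a stake reading) can be sketched generically as a random parameter search. The toy mass balance function, parameter names, ranges and the observation below are invented for illustration and are not the study's model or data.

      # Hedged sketch of random-search calibration against a stake observation.
      import numpy as np

      rng = np.random.default_rng(0)
      observed_dh = -2.1  # observed surface height change at a stake (m), illustrative

      def toy_mass_balance(albedo, z0):
          # stand-in for a process-based surface energy/mass balance model
          return -(1.0 - albedo) * 4.0 - z0 * 50.0

      ranges = {"albedo": (0.2, 0.6), "z0": (0.001, 0.01)}  # plausible bounds (assumed)
      best_params, best_err = None, np.inf
      for _ in range(1000):
          p = {k: rng.uniform(*v) for k, v in ranges.items()}
          err = abs(toy_mass_balance(**p) - observed_dh)
          if err < best_err:
              best_params, best_err = p, err

      print("best parameter set:", best_params, "abs. error (m):", round(best_err, 3))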

  17. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    Full Text Available The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy. The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  18. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences of the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of RADTRAN Stop Parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to the uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops

  19. Updated climatological model predictions of ionospheric and HF propagation parameters

    International Nuclear Information System (INIS)

    Reilly, M.H.; Rhoads, F.J.; Goodman, J.M.; Singh, M.

    1991-01-01

    The prediction performances of several climatological models, including the ionospheric conductivity and electron density model, RADAR C, and Ionospheric Communications Analysis and Predictions Program, are evaluated for different regions and sunspot number inputs. Particular attention is given to the near-real-time (NRT) predictions associated with single-station updates. It is shown that a dramatic improvement can be obtained by using single-station ionospheric data to update the driving parameters for an ionospheric model for NRT predictions of foF2 and other ionospheric and HF circuit parameters. For middle latitudes, the improvement extends out thousands of kilometers from the update point to points of comparable corrected geomagnetic latitude. 10 refs

  20. Statistical approach for uncertainty quantification of experimental modal model parameters

    DEFF Research Database (Denmark)

    Luczak, M.; Peeters, B.; Kahsin, M.

    2014-01-01

    Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic time-varying loading conditions, and robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. The investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.

  1. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has been used more broadly in the nuclear industry and in regulation to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problem areas in BE calculation. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of the OECD/NEA is to make progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential on the response of reflood phenomena following a Large Break LOCA. For this purpose, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  2. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    An analytical local model potential for modeling the interaction in an atom reduces the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and are obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of the electrons are obtained by solving the radial Schrödinger equation with the new form of potential function using Numerov's numerical method. The results show that the new form of potential function is suitable for high-, medium- and low-Z atoms. A comparison among the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present new potential function. (atomic and molecular physics)
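
    For readers unfamiliar with Numerov integration of the radial Schrödinger equation mentioned above, the sketch below integrates the hydrogen 1s state in atomic units. The Coulomb potential V(r) = -1/r and the known eigenvalue E = -0.5 are used purely as an illustration; the four-parameter model potential of the paper is not reproduced here.

      # Hedged sketch: Numerov integration of u''(r) = f(r) u(r) for hydrogen 1s.
      import numpy as np

      E, l = -0.5, 0                     # known hydrogen 1s eigenvalue (atomic units)
      h = 1.0e-3
      r = np.arange(h, 20.0, h)
      f = l * (l + 1) / r**2 - 2.0 / r - 2.0 * E   # from the radial equation

      u = np.zeros_like(r)
      u[0], u[1] = r[0] ** (l + 1), r[1] ** (l + 1)  # small-r behaviour u ~ r^(l+1)
      for n in range(1, r.size - 1):
          u[n + 1] = (2.0 * u[n] * (1.0 + 5.0 * h**2 * f[n] / 12.0)
                      - u[n - 1] * (1.0 - h**2 * f[n - 1] / 12.0)) \
                     / (1.0 - h**2 * f[n + 1] / 12.0)

      u /= np.sqrt(np.sum(u**2) * h)     # normalise numerically
      exact = 2.0 * r * np.exp(-r)       # normalised exact 1s radial function u(r) = r R(r)
      print("max |u - u_exact| =", np.max(np.abs(u - exact)))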

  3. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    Science.gov (United States)

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. Copyright © 2016 Elsevier B.V. All rights reserved.
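
    The idea of distance- and velocity-dependent foot-ground reaction elements can be illustrated with a generic nonlinear spring-damper contact law plus a saturating friction-like shear law. The functional forms and all coefficients below are illustrative assumptions, not the parameters or formulation of the cited model.

      # Hedged sketch of a generic distance- and velocity-dependent contact element.
      import math

      def vertical_grf(penetration_m, vertical_vel_mps, k=2.0e5, c=1.0e3, p=1.5):
          """Vertical reaction force (N) of one foot-ground contact element."""
          if penetration_m <= 0.0:          # foot above the ground: no force
              return 0.0
          spring = k * penetration_m ** p
          damper = c * penetration_m * max(-vertical_vel_mps, 0.0)
          return spring + damper

      def shear_grf(vertical_force_n, slip_vel_mps, mu=0.8, v_ref=0.05):
          """Anterior-posterior shear force scaled by the vertical force."""
          return -mu * vertical_force_n * math.tanh(slip_vel_mps / v_ref)

      fz = vertical_grf(0.005, -0.2)
      print("Fz =", round(fz, 1), "N; Fx =", round(shear_grf(fz, 0.3), 1), "N")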

  4. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied to testing modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have first been used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour has finally been obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  5. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Science.gov (United States)

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  6. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  7. Model parameter learning using Kullback-Leibler divergence

    Science.gov (United States)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to a dynamical system composed of self-propelled particles, where we extract the parameters of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
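
    The coupling-recovery idea can be reproduced on a toy scale: for a 1D periodic Ising chain small enough to enumerate exactly, the coupling J of the Boltzmann distribution is recovered by minimizing the KL divergence to a target distribution generated at a known J. This is a hedged sketch (SciPy, 8 spins, exact enumeration), not the code or the estimators of the paper.

      # Toy recovery of an Ising coupling J by minimising the KL divergence
      # between a target Boltzmann distribution and the model distribution.
      import itertools
      import numpy as np
      from scipy.optimize import minimize_scalar

      N = 8
      configs = np.array(list(itertools.product([-1, 1], repeat=N)))
      # nearest-neighbour sum s_i * s_{i+1} with periodic boundary conditions
      pair_sum = np.sum(configs * np.roll(configs, -1, axis=1), axis=1)

      def boltzmann(J):
          w = np.exp(J * pair_sum)        # inverse temperature absorbed into J
          return w / w.sum()

      p_data = boltzmann(0.7)             # "observed" distribution at J_true = 0.7

      def kl(J):
          q = boltzmann(J)
          return np.sum(p_data * np.log(p_data / q))

      res = minimize_scalar(kl, bounds=(0.0, 2.0), method="bounded")
      print("recovered J =", round(res.x, 3))   # should be close to 0.7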

  8. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Full Text Available The classical calibration or space resection is the fundamental task in photogrammetry. The lack of sufficient knowledge of interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations that consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known both in the image and object reference systems. The non-linearity of the model, the problems of point location in digital images, and the difficulty of identifying a sufficient number of GPS-measured control points are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points, involving two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both with and without the use of control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared when using a single control point and when using no control points.

  9. Optimization of parameters for fitting linear accelerator photon beams using a modified CBEAM model

    International Nuclear Information System (INIS)

    Ayyangar, K.; Daftari, I.; Palta, J.; Suntharalingam, N.

    1989-01-01

    Measured beam profiles and central-axis depth-dose data for 6- and 25-MV photon beams are used to generate a dose matrix which represents the full beam. A corresponding dose matrix is also calculated using the modified CBEAM model. The calculational model uses the usual set of three parameters to define the intensity at beam edges and the parameter that accounts for collimator transmission. An additional set of three parameters is used for the primary profile factor, expressed as a function of distance from the central axis. An optimization program has been adapted to automatically adjust these parameters to minimize the χ² between the measured and calculated data. The average values of the parameters for small (6x6 cm²), medium (10x10 cm²), and large (20x20 cm²) field sizes are found to represent the beam adequately for all field sizes. The calculated and the measured doses at any point agree to within 2% for any field size in the range 4x4 to 40x40 cm²
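
    The fitting step described above amounts to a χ² (least-squares) minimization between measured and calculated dose values. The sketch below fits a toy beam-edge profile with three parameters using SciPy; the profile function, its parameters and the synthetic "measurements" are stand-ins for illustration, not the modified CBEAM model.

      # Hedged sketch: least-squares adjustment of toy beam-profile parameters.
      import numpy as np
      from scipy.optimize import least_squares

      x = np.linspace(-8.0, 8.0, 81)                 # off-axis distance (cm)

      def profile(params, x):
          edge, width, transmission = params
          # flat top with soft edges plus a collimator-transmission floor
          return (1.0 - transmission) / (1.0 + np.exp((np.abs(x) - edge) / width)) \
                 + transmission

      true = (5.0, 0.4, 0.02)
      measured = profile(true, x) + np.random.default_rng(1).normal(0.0, 0.005, x.size)

      fit = least_squares(lambda p: profile(p, x) - measured,
                          x0=(4.0, 0.8, 0.05),
                          bounds=([1.0, 0.05, 0.0], [8.0, 2.0, 0.2]))
      print("fitted parameters:", np.round(fit.x, 3))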

  10. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked through changes in the position of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  11. Biosphere modelling for a HLW repository - scenario and parameter variations

    International Nuclear Information System (INIS)

    Grogan, H.

    1985-03-01

    In Switzerland, high-level radioactive wastes have been considered for disposal in deep-lying crystalline formations. The individual doses to man resulting from radionuclides entering the biosphere via groundwater transport are calculated. The main recipient area modelled, which constitutes the base case, is a broad gravel terrace sited along the south bank of the river Rhine. An alternative recipient region, a small valley with a well, is also modelled. A number of parameter variations are performed in order to ascertain their impact on the doses. Finally, two scenario changes are modelled somewhat simplistically; these consider different prevailing climates, namely tundra and a climate warmer than the present one. In the base case, negligibly low long-term doses to man resulting from the existence of a HLW repository have been calculated. Cs-135 results in the largest dose (8.4E-7 mrem/y at 6.1E+6 y), while Np-237 gives the largest dose from the actinides (3.6E-8 mrem/y). The response of the model to parameter variations cannot be easily predicted due to non-linear coupling of many of the parameters. However, the calculated doses were negligibly low in all cases, as were those resulting from the two scenario variations. (author)

  12. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    Science.gov (United States)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and
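
    A minimal LHS-PRCC workflow of the kind described above can be sketched as follows for a toy three-parameter model: Latin hypercube samples are scaled to the parameter bounds, the model is evaluated, and partial rank correlation coefficients are computed by rank-transforming the data and regressing out the other inputs. The toy model, bounds and parameter names are illustrative assumptions, not the LPM or FEM studies themselves.

      # Hedged sketch of Latin hypercube sampling + partial rank correlation (LHS-PRCC).
      import numpy as np
      from scipy.stats import qmc, rankdata

      rng = np.random.default_rng(2)
      names = ["p1", "p2", "p3"]
      lower, upper = np.array([0.0, 1.0, 10.0]), np.array([1.0, 5.0, 50.0])

      sample = qmc.scale(qmc.LatinHypercube(d=3, seed=2).random(500), lower, upper)
      # toy model output: strong effect of p1, weak effect of p2, none of p3
      y = sample[:, 0] ** 2 + 0.1 * sample[:, 1] + rng.normal(0.0, 0.05, 500)

      def prcc(X, y):
          R = np.column_stack([rankdata(c) for c in X.T])
          ry = rankdata(y)
          out = []
          for j in range(X.shape[1]):
              others = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
              res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
              res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
              out.append(np.corrcoef(res_x, res_y)[0, 1])
          return out

      for n, c in zip(names, prcc(sample, y)):
          print(f"PRCC({n}) = {c:+.2f}")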

  13. Contaminant transport in aquifers: improving the determination of model parameters

    International Nuclear Information System (INIS)

    Sabino, C.V.S.; Moreira, R.M.; Lula, Z.L.; Chausson, Y.; Magalhaes, W.F.; Vianna, M.N.

    1998-01-01

    Parameters conditioning the migration behavior of cesium and mercury are measured with their tracers 137Cs and 203Hg in the laboratory, using both batch and column experiments. Batch tests were used to define the sorption isotherm characteristics. Also investigated were the influences of some test parameters, in particular those due to the volume of water to mass of soil ratio (V/m). A provisional relationship between V/m and the distribution coefficient, Kd, has been advanced, and a procedure to estimate Kd values valid for environmental values of the ratio V/m has been suggested. Column tests provided the parameters for a transport model. A major problem to be dealt with in such tests is the collimation of the radioactivity probe. Besides mechanically optimizing the collimator, a deconvolution procedure has been suggested and tested, with statistical criteria, to filter off both noise and spurious tracer signals. Correction procedures for the integrating effect introduced by sampling at the exit of columns have also been developed. These techniques may be helpful in increasing the accuracy required in the measurement of parameters conditioning contaminant migration in soils, thus allowing more reliable predictions based on mathematical model applications. (author)
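
    For the batch-test step, the distribution coefficient Kd is commonly obtained as the slope of a linear sorption isotherm S = Kd·C fitted to sorbed versus equilibrium solution concentrations; the data points below are invented for illustration only and do not come from the cited study.

      # Hedged sketch: Kd as the zero-intercept slope of a linear sorption isotherm.
      import numpy as np

      C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # equilibrium solution conc. (Bq/mL)
      S = np.array([6.1, 11.8, 24.5, 47.9, 97.2])  # sorbed concentration (Bq/g)

      kd = np.linalg.lstsq(C.reshape(-1, 1), S, rcond=None)[0][0]
      print(f"Kd = {kd:.1f} mL/g (slope of the fitted linear isotherm)")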

  14. Analisis Portofolio Optimum Saham Syariah Menggunakan Liquidity Adjusted Capital Asset Pricing Model (LCAPM

    Directory of Open Access Journals (Sweden)

    Nila Cahyati

    2015-04-01

    Full Text Available Investment is characterized by a trade-off between return and risk. Optimal portfolio construction is used to maximize return and minimize risk. The Liquidity Adjusted Capital Asset Pricing Model (LCAPM) is a newly developed extension of the CAPM that accounts for liquidity. When combined with the CAPM, liquidity indicators can help maximize return and minimize risk. The objectives of this study were to compare the expected returns and risks of the stocks and to determine the proportions in the optimal portfolio. The sample consists of JII (Jakarta Islamic Index) stocks for the period January 2013 - November 2014. The results show that the expected return of the LCAPM portfolio is 0.0956 with a risk of 0.0043, formed by AALI shares (55.19%) and PGAS shares (44.81%).

  15. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    Science.gov (United States)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and to reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test it on several UAV datasets obtained with non-metric digital cameras at large attitude angles, and we find that our method achieves one to two times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from strong dependence on the initial values, making it unable to converge quickly or to a stable state. In contrast, the APCP method can deal with quite complex UAS conditions during monitoring, as it represents points in space with angles, including the condition in which sequential images of one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and
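
    To make the angle-based representation concrete, the sketch below recovers a 3D point from an elevation angle, an azimuth angle and a parallax angle, given the positions of a "main" and an "associate" anchor camera, using the sine rule in the camera-camera-feature triangle. This follows the generic parallax-angle feature parametrization; it is not claimed to be the exact APCP formulation, and all numerical values are illustrative.

      # Hedged sketch: 3D point from an angle-based (parallax) parametrisation.
      import numpy as np

      def point_from_parallax(p_main, p_assoc, elevation, azimuth, parallax):
          # unit ray from the main anchor defined by the two orientation angles
          d = np.array([np.cos(elevation) * np.cos(azimuth),
                        np.cos(elevation) * np.sin(azimuth),
                        np.sin(elevation)])
          t = p_assoc - p_main
          alpha = np.arccos(np.clip(d @ t / np.linalg.norm(t), -1.0, 1.0))
          # sine rule in the triangle (main anchor, associate anchor, feature)
          depth = np.linalg.norm(t) * np.sin(alpha + parallax) / np.sin(parallax)
          return p_main + depth * d

      p = point_from_parallax(p_main=np.array([0.0, 0.0, 100.0]),
                              p_assoc=np.array([30.0, 0.0, 100.0]),
                              elevation=np.deg2rad(-60.0),
                              azimuth=np.deg2rad(10.0),
                              parallax=np.deg2rad(8.0))
      print("recovered ground point:", np.round(p, 2))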

  16. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  17. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1, and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  18. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
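
    The overall search strategy can be illustrated with a bare-bones simulated annealing loop over a two-parameter space; in the cited algorithm the score would come from sequential hypothesis testing and statistical model checking of the stochastic model, whereas here a synthetic distance-to-target objective stands in for it, and all values are illustrative.

      # Hedged sketch of simulated-annealing parameter discovery on a toy objective.
      import math
      import random

      random.seed(3)
      target = (0.4, 1.7)                      # "true" parameters, illustrative only

      def score(params):
          return sum((p - t) ** 2 for p, t in zip(params, target))

      current = [1.0, 1.0]
      best, best_s = list(current), score(current)
      temperature = 1.0
      for step in range(5000):
          candidate = [p + random.gauss(0.0, 0.1) for p in current]
          delta = score(candidate) - score(current)
          if delta < 0 or random.random() < math.exp(-delta / temperature):
              current = candidate              # accept downhill or occasionally uphill
          if score(current) < best_s:
              best, best_s = list(current), score(current)
          temperature *= 0.999                 # geometric cooling schedule
      print("discovered parameters:", [round(p, 2) for p in best])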

  19. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law leading to a set of first order differential equations. The main objective of this study is to review classic point kinetics equation in order to approximate its results to the case when it is considered the time variation of the neutron currents. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model and then it is determined an empirical adjustment factor that modifies the point reactor kinetics equation to the real scenario. (author)

  20. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law leading to a set of first order differential equations. The main objective of this study is to review classic point kinetics equation in order to approximate its results to the case when it is considered the time variation of the neutron currents. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model and then it is determined an empirical adjustment factor that modifies the point reactor kinetics equation to the real scenario. (author)
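
    As a point of reference for the finite-difference treatment mentioned in the two records above, the classic point kinetics equations with one delayed-neutron group can be integrated with a simple forward-difference scheme as sketched below. The kinetic parameters and the step reactivity are typical illustrative values; the empirical adjustment factor derived in the papers is not included.

      # Hedged sketch: forward-difference integration of classic point kinetics
      # with one delayed-neutron group and a step reactivity insertion.
      beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const (1/s), generation time (s)
      rho = 0.001                                # step reactivity (positive insertion)
      dt, t_end = 1.0e-4, 10.0

      n, C = 1.0, beta / (lam * Lambda)          # steady-state initial condition (rho = 0)
      for _ in range(int(t_end / dt)):
          dn = ((rho - beta) / Lambda) * n + lam * C
          dC = (beta / Lambda) * n - lam * C
          n, C = n + dt * dn, C + dt * dC

      print(f"relative neutron density after {t_end:.0f} s: {n:.2f}")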

  1. PACE and the Medicare+Choice risk-adjusted payment model.

    Science.gov (United States)

    Temkin-Greener, H; Meiners, M R; Gruenberg, L

    2001-01-01

    This paper investigates the impact of the Medicare principal inpatient diagnostic cost group (PIP-DCG) payment model on the Program of All-Inclusive Care for the Elderly (PACE). Currently, more than 6,000 Medicare beneficiaries who are nursing home certifiable receive care from PACE, a program poised for expansion under the Balanced Budget Act of 1997. Overall, our analysis suggests that the application of the PIP-DCG model to the PACE program would reduce Medicare payments to PACE, on average, by 38%. The PIP-DCG payment model bases its risk adjustment on inpatient diagnoses and does not adequately capture the risk of caring for a population with functional impairments.

  2. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption...... and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory....

  3. Mathematical models to predict rheological parameters of lateritic hydromixtures

    OpenAIRE

    Gabriel Hernández-Ramírez; Arístides A. Legrá-Lobaina; Beatriz Ramírez-Serrano; Liudmila Pérez-García

    2017-01-01

    The objective of the present work was to establish mathematical models that allow the prediction of the rheological parameters of lateritic pulp at solids concentrations from 35% to 48%, preheated hydromixture temperatures above 82 °C, and mineral number between 3 and 16. Four samples of lateritic pulp were used in the study, taken at different process locations. The results allowed defining that the plastic properties of the lateritic pulp under the conditions of this study conform to...

  4. Mathematical properties and parameter estimation for transit compartment pharmacodynamic models.

    Science.gov (United States)

    Yates, James W T

    2008-07-03

    One feature of recent research in pharmacodynamic modelling has been the move towards more mechanistically based model structures. However, in all of these models there are common sub-systems, such as feedback loops and time-delays, whose properties and contribution to the model behaviour merit some mathematical analysis. In this paper a common pharmacodynamic model sub-structure is considered: the linear transit compartment. These models have a number of interesting properties as the length of the cascade chain is increased. In the limiting case a pure time-delay is achieved [Milsum, J.H., 1966. Biological Control Systems Analysis. McGraw-Hill Book Company, New York], and the initial behaviour becomes increasingly sensitive to parameter value perturbation. It is also shown that the modelled drug effect is attenuated, though the duration of action is longer. Through this analysis the range of behaviours that such models are capable of reproducing is characterised. The properties of these models and the experimental requirements are discussed in order to highlight how mathematical analysis prior to experimentation can enhance the utility of mathematical modelling.
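    A minimal sketch of the sub-structure discussed above: a chain of n linear transit compartments with a fixed mean transit time, integrated numerically so that the approach to a pure time delay as n grows can be seen directly. The unit-step input and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def transit_chain(n, mtt, t_end=10.0):
    """Linear transit-compartment cascade with n compartments and fixed mean
    transit time mtt; as n grows the output approaches a pure time delay."""
    ktr = n / mtt                      # rate constant per compartment
    def rhs(t, a):
        inp = 1.0                      # constant unit input (illustrative)
        da = np.empty(n)
        da[0] = ktr * (inp - a[0])
        for i in range(1, n):
            da[i] = ktr * (a[i - 1] - a[i])
        return da
    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(n), max_step=0.01)
    return sol.t, sol.y[-1]            # signal at the end of the cascade

for n in (1, 3, 10, 30):               # longer chains -> sharper, more delay-like response
    t, out = transit_chain(n, mtt=2.0)
    print(n, out[-1])
```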

  5. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    Science.gov (United States)

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. The objective was to determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.

  6. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality of mean and variance, it is not appropriate for overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyze count data. Count data often contain observations with zero values, with a large proportion of zeros in the response variable (zero inflation). Poisson regression can be used to analyze count data, but it cannot handle an excess of zero values in the response variable. An alternative model that is more suitable for overdispersed data and can handle excess zeros in the response variable is the Zero Inflated Negative Binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The Maximum Likelihood Estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partially significant effect in the negative binomial model are the percentage of visits by pregnant women and the percentage of deliveries assisted by maternal health personnel, while the predictor variable with a partially significant effect in the zero-inflation model is the percentage of neonatal visits.
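    The sketch below fits a covariate-free zero-inflated negative binomial distribution by direct numerical maximisation of the log-likelihood on synthetic data; the paper instead maximises the likelihood with an EM algorithm and includes covariates, so this is only a simplified illustration of the ZINB likelihood itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

# Minimal ZINB fit without covariates: constant mean mu, dispersion alpha,
# zero-inflation probability pi. Data below are synthetic and illustrative.
def zinb_negloglik(theta, y):
    mu, alpha, pi = np.exp(theta[0]), np.exp(theta[1]), 1 / (1 + np.exp(-theta[2]))
    n, p = 1.0 / alpha, 1.0 / (1.0 + alpha * mu)      # NB2 parameterisation
    pmf = nbinom.pmf(y, n, p)
    lik = np.where(y == 0, pi + (1 - pi) * pmf, (1 - pi) * pmf)
    return -np.sum(np.log(lik + 1e-12))

rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.3, 0, rng.negative_binomial(2, 0.4, 500))  # toy zero-inflated counts
fit = minimize(zinb_negloglik, x0=[0.0, 0.0, 0.0], args=(y,), method="Nelder-Mead")
mu, alpha, pi = np.exp(fit.x[0]), np.exp(fit.x[1]), 1 / (1 + np.exp(-fit.x[2]))
print(mu, alpha, pi)
```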

  7. COMPREHENSIVE CHECK MEASUREMENT OF KEY PARAMETERS ON MODEL BELT CONVEYOR

    Directory of Open Access Journals (Sweden)

    Vlastimil MONI

    2013-07-01

    Full Text Available Complex measurements of characteristic parameters realised on a long-distance model belt conveyor are described. The main objective was to complete and combine the regular measurements of electric power on the drives of belt conveyors operated in Czech opencast mines with measurements of other physical quantities, and in this way to obtain a picture of their mutual relations and of the quantities derived from them. The paper includes a short description and results of the measurements on an experimental model conveyor with a closed material transport path.

  8. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  9. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
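    The sketch below illustrates the two ingredients named above on a toy problem: a cheap polynomial surrogate is fitted to a handful of evaluations of a hypothetical forward model, and a random-walk Metropolis sampler then updates a single drag-like coefficient from one noisy observation. The forward model, prior bounds, and noise level are all illustrative assumptions, not the ocean-model setup of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: response as a function of a single drag-like
# coefficient c. In practice this would be an expensive ocean-model run.
def forward(c):
    return np.sin(2.0 * c) + 0.5 * c

# 1) Build a cheap polynomial surrogate from a handful of model evaluations.
train_c = np.linspace(0.0, 2.0, 9)
surrogate = np.polynomial.Polynomial.fit(train_c, forward(train_c), deg=5)

# 2) Bayesian update of c given one noisy observation, using the surrogate
#    inside a random-walk Metropolis sampler.
obs, sigma = forward(1.3) + 0.05, 0.1
def log_post(c):
    if not 0.0 <= c <= 2.0:                     # uniform prior on [0, 2]
        return -np.inf
    return -0.5 * ((obs - surrogate(c)) / sigma) ** 2

chain, c_cur, lp_cur = [], 1.0, log_post(1.0)
for _ in range(5000):
    c_prop = c_cur + 0.1 * rng.standard_normal()
    lp_prop = log_post(c_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        c_cur, lp_cur = c_prop, lp_prop
    chain.append(c_cur)
print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior mean and spread
```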

  10. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-01

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  11. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
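    For reference, the sketch below writes out the three-parameter logistic item response function and simulates the kind of response matrix used in item-parameter recovery studies; the item parameters and the standard-normal ability distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) item response function:
    P(correct) = c + (1 - c) / (1 + exp(-a (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Simulate responses of the kind used in item-parameter recovery studies:
# abilities drawn from an assumed ability distribution, one item with known a, b, c.
theta = rng.standard_normal(2000)                    # examinee abilities
a, b, c = 1.2, 0.3, 0.2                              # discrimination, difficulty, guessing
responses = rng.random(theta.size) < p_correct(theta, a, b, c)
print(responses.mean())                              # observed proportion correct
```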

  12. Ordinary Mathematical Models in Calculating the Aviation GTE Parameters

    Directory of Open Access Journals (Sweden)

    E. A. Khoreva

    2017-01-01

    Full Text Available The paper presents the results of an analytical review of the ordinary mathematical models of the operating process used to study aviation GTE parameters and characteristics at all stages of engine creation and operation. It considers the mathematical models of the zero and first levels, which are mostly used when solving typical problems of calculating engine parameters and characteristics, and presents a number of practical problems arising in designing aviation GTEs for various applications. The application of zero-level mathematical models of the engine can be quite appropriate when the engine is considered as a component of the aircraft system, to estimate its calculated individual flight performance or to model the flight cycle of aircraft of different purposes. The paper demonstrates that introducing correction functions into the first-level mathematical models when solving typical problems (the influence of the Reynolds number, the deterioration of unit characteristics over the engine overhaul period, the influence of flow inhomogeneity at the inlet due to manufacturing tolerances, etc.) provides sufficient engineering accuracy to reflect a realistic operating process in the engine and its elements.

  13. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the genetic algorithm settings that maximize the effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals run under a few genetic algorithm parameters. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We cannot set the optimal generation size, because it shows a positive correlation with the effectiveness of the genetic algorithm over the whole range under study, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, followed by the generation size; the sensitivity to the elitism rate was not as strong.
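    A minimal sketch of the approach on synthetic data: a real-valued genetic algorithm (selection, elitism, and per-gene mutation; crossover omitted for brevity) searches for the two parameters of a hypothetical constant-elasticity demand function by least squares. The demand function, population size, and rates are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demand function q = a * price**(-b); the GA searches for (a, b) that
# minimise squared error on synthetic observations.
price = np.linspace(1.0, 10.0, 30)
q_obs = 5.0 * price ** (-1.3) + 0.05 * rng.standard_normal(price.size)

def fitness(pop):
    pred = pop[:, [0]] * price ** (-pop[:, [1]])
    return -np.sum((pred - q_obs) ** 2, axis=1)            # higher is better

pop = rng.uniform([0.1, 0.1], [10.0, 3.0], size=(60, 2))   # initial generation
elite_frac, mut_rate = 0.2, 0.15
for generation in range(200):
    fit = fitness(pop)
    order = np.argsort(fit)[::-1]
    elite = pop[order[: int(elite_frac * len(pop))]]        # elitism: keep the best unchanged
    parents = pop[order[: len(pop) // 2]]                   # truncation selection
    children = parents[rng.integers(0, len(parents), len(pop) - len(elite))].copy()
    mutate = rng.random(children.shape) < mut_rate          # per-gene mutation
    children[mutate] += 0.1 * rng.standard_normal(np.count_nonzero(mutate))
    pop = np.vstack([elite, children])
print(pop[np.argmax(fitness(pop))])                         # best (a, b) found
```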

  14. A Review of Distributed Parameter Groundwater Management Modeling Methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-04-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  15. Parameters of oscillation generation regions in open star cluster models

    Science.gov (United States)

    Danilov, V. M.; Putkov, S. I.

    2017-07-01

    We determine the masses and radii of central regions of open star cluster (OCL) models with small or zero entropy production and estimate the masses of oscillation generation regions in cluster models based on the data of the phase-space coordinates of stars. The radii of such regions are close to the core radii of the OCL models. We develop a new method for estimating the total OCL masses based on the cluster core mass, the cluster and cluster core radii, and the radial distribution of stars. This method yields estimates of the dynamical masses of the Pleiades, Praesepe, and M67, which agree well with the estimates of the total masses of the corresponding clusters based on proper motions and spectroscopic data for cluster stars. We construct the spectra and dispersion curves of the oscillations of the field of azimuthal velocities v_φ in OCL models. Weak, low-amplitude unstable oscillations of v_φ develop in cluster models near the cluster core boundary, and weak damped oscillations of v_φ often develop at frequencies close to the frequencies of more powerful oscillations, which may reduce the degree of non-stationarity in OCL models. We determine the number and parameters of such oscillations near the core boundaries of cluster models. Such oscillations point to the possible role that gradient instability near the core of cluster models plays in the decrease of the mass of the oscillation generation regions and the production of entropy in the cores of OCL models with massive extended cores.

  16. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    Science.gov (United States)

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable and that better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be estimated. Further, PIA improved the model results, showing itself to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.

  17. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.

  18. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    Science.gov (United States)

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Although advances have recently been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the random components are all positively or all negatively correlated. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components, that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties, and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    Science.gov (United States)

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs) the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFM) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, Probpayment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, Fee-For-Service Federal payments for drugs and medical services; lump sum lung transplant payments and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost Groups (DCGs) is a promising alternative classification system for capitation arrangements.

  20. Tension-compression asymmetry modelling: strategies for anisotropy parameters identification.

    Directory of Open Access Journals (Sweden)

    Barros Pedro

    2016-01-01

    Full Text Available This work presents details concerning the strategies and algorithms adopted in the fully implicit FE solver DD3IMP to model the orthotropic behavior of metallic sheets, and the procedure for anisotropy parameter identification. The work is focused on the yield criterion developed by Cazacu, Plunkett and Barlat, 2006 [1], which accounts for both tension-compression asymmetry and orthotropic plastic behavior. The anisotropy parameters for a 2090-T3 aluminum alloy are identified both with and without accounting for the tension-compression asymmetry. The numerical simulation of a cup drawing is performed for this material, highlighting the importance of considering tension-compression asymmetry in the prediction of the earing profile for materials with cubic structure, even if this phenomenon is relatively small.

  1. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....

  2. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable science and technology development is also conditioned by the continuous development of the means of production, which have a key role in the structure of each production system. The mechanical nature of the means of production is complemented by controlling and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. With regard to increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is necessary to use computing techniques and decision-making methods during the selection, ranging from heuristic methods to more precise methodological procedures. The authors present an innovative model for optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  3. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
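    One of the ingredients named above, sequential hypothesis testing, can be sketched on its own: Wald's sequential probability ratio test decides, from as few stochastic runs as possible, whether a behavioural property holds with probability above one threshold or below another. The toy system, thresholds, and error rates below are illustrative assumptions, not the agent-based inflammation model of the paper.

```python
import math
import random

def sprt(sample, p0, p1, alpha=0.05, beta=0.05, max_runs=10000):
    """Wald's sequential probability ratio test: decide between
    H0: p >= p0 and H1: p <= p1 (with p1 < p0) from Bernoulli simulation runs,
    drawing only as many runs as needed."""
    log_a, log_b = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0                                        # log likelihood ratio of H1 vs H0
    for _ in range(max_runs):
        x = sample()                                 # one stochastic run -> True/False
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= log_a:
            return "accept H1 (property too rare)"
        if llr <= log_b:
            return "accept H0 (property frequent enough)"
    return "undecided"

# Toy stochastic system: the property holds in ~95% of runs.
print(sprt(lambda: random.random() < 0.95, p0=0.9, p1=0.7))
```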

  4. Models of quality-adjusted life years when health varies over time

    DEFF Research Database (Denmark)

    Hansen, Kristian Schultz; Østerdal, Lars Peter Raahave

    2006-01-01

    Quality-adjusted life year (QALY) models are widely used for economic evaluation in the health care sector. In the first part of the paper, we establish an overview of QALY models where health varies over time and provide a theoretical analysis of model identification and parameter estimation from...... time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning...... of these two can be used to disentangle risk aversion from discounting. We find that caution must be taken when drawing conclusions from models with chronic health states to situations where health varies over time. One notable difference is that in the former case, risk aversion may be indistinguishable from...

  5. Identification of grid model parameters using synchrophasor measurements

    Energy Technology Data Exchange (ETDEWEB)

    Boicea, Valentin; Albu, Mihaela [Politehnica University of Bucharest (Romania)

    2012-07-01

    Presently, a critical element of energy networks is the active distribution grid, where generation intermittency and controllable loads contribute to a stochastic variability of the quantities characterizing grid operation. The capability of controlling the electrical energy transfer is also limited by incomplete knowledge of the detailed electrical model of each of the grid components. Asset management in distribution grids has to consider dynamic loads, while high loading of network sections might already have degraded some of the assets. Moreover, in the case of functional microgrids, all elements need to be modelled accurately and an appropriate measurement layer enabling online control needs to be deployed. In this paper a method for online identification of the actual parameter values in grid electrical models is proposed. Laboratory results validating the proposed method are presented. (orig.)

  6. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  7. Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.

    Science.gov (United States)

    Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina

    2017-07-01

    Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    Science.gov (United States)

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become

  9. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-08-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model is part of the category which characterizes economic growth. The aim of the paper is the measurement of economic growth and an add-on to the adjusted R.M. Solow model.

  10. A Comparative Study of CAPM and Seven Factors Risk Adjusted Return Model

    Directory of Open Access Journals (Sweden)

    Madiha Riaz Bhatti

    2014-12-01

    Full Text Available This study compares and contrasts the predictive powers of two asset pricing models: the CAPM and a seven-factor risk-adjusted return model, in explaining the cross section of stock returns in the financial sector listed at the Karachi Stock Exchange (KSE). To test the models, daily returns from January 2013 to February 2014 were taken, and the excess returns of portfolios were regressed on the explanatory variables. The results of the tested models indicate that the models are valid and applicable in the financial market of Pakistan during the period under study, as the intercepts are not significantly different from zero. It is consequently established from the findings that all the explanatory variables explain the stock returns in the financial sector of the KSE. In addition, the results of this study show that the addition of more explanatory variables to the single-factor CAPM results in reasonably high values of R2. These results provide substantial support to fund managers, investors and financial analysts in making investment decisions.

  11. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity, in order to gain robust real-time control on the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and integrate it, along with models of energy sources and electrical loads, in a complete framework which represents a real time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
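    The sketch below mimics the hybrid idea described above on synthetic data: a stochastic global search (differential evolution) provides a rough estimate of the parameters of a hypothetical first-order Thevenin battery model, and a deterministic least-squares step refines it. The model structure, the true parameter values, and the noise level are illustrative assumptions, not the battery model of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Hypothetical first-order Thevenin battery response to a current step:
# V(t) = OCV - I*R0 - I*R1*(1 - exp(-t / (R1*C1))). All values are illustrative.
t = np.linspace(0.0, 600.0, 300)
I = 2.0                                                        # discharge current [A]

def model(p, t):
    ocv, r0, r1, c1 = p
    return ocv - I * r0 - I * r1 * (1.0 - np.exp(-t / (r1 * c1)))

true = [3.7, 0.05, 0.03, 2000.0]
v_meas = model(true, t) + 0.002 * np.random.default_rng(0).standard_normal(t.size)

bounds = [(3.0, 4.2), (0.001, 0.2), (0.001, 0.2), (100.0, 10000.0)]
cost = lambda p: np.sum((model(p, t) - v_meas) ** 2)

rough = differential_evolution(cost, bounds, seed=0, maxiter=50)   # stochastic stage
fine = least_squares(lambda p: model(p, t) - v_meas, rough.x,      # deterministic refinement
                     bounds=([b[0] for b in bounds], [b[1] for b in bounds]))
print(fine.x)   # estimated OCV, R0, R1, C1
```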

  12. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
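    A minimal illustration of the position-based idea described above: for Brownian motion with drift, the drift velocity and diffusion coefficient can be read off the mean and variance of the position increments, rather than fitted to the mean square displacement curve. The trajectory below is simulated with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Brownian motion with drift in 2-D: increments have mean v*dt and variance
# 2*D*dt per coordinate. Values of v, D, dt are illustrative.
v_true, D_true, dt, n = np.array([0.2, -0.1]), 0.5, 1.0, 500
steps = v_true * dt + np.sqrt(2 * D_true * dt) * rng.standard_normal((n, 2))
positions = np.cumsum(steps, axis=0)                 # the observed cell path

# Estimate drift and diffusion directly from the position increments
# (instead of fitting the mean square displacement curve).
inc = np.diff(positions, axis=0)
v_hat = inc.mean(axis=0) / dt
D_hat = inc.var(axis=0, ddof=1).mean() / (2 * dt)
print(v_hat, D_hat)
```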

  13. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Young, L; Yang, F [University of Washington, Seattle, WA (United States)

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation, and 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for smaller field sizes 3.6 cm or less where the initial gamma factors progressively increased as field size decreased, i.e. for a 1.6cm field size, the Gamma increased from 56.1% to 98.8%. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small field VMAT treatment plans for fixed jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.

  14. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  15. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from {sup 125}I), this effect can be very important.

  16. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.O (CRWMS M and O 1998, CSCI 30018 V1.O) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding newly specified materials and updating old materials information that has changed.

  17. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is presented, as well as its application to the adjustment of automatic control systems (ACS) of production facilities, using the example of a coal processing plant.

  18. An Analysis of Missile Systems Cost Growth and Implementation of Acquisition Reform Initiatives Using a Hybrid Adjusted Cost Growth Model

    National Research Council Canada - National Science Library

    Abate, Christopher

    2004-01-01

    ...) data with a hybrid adjusted cost growth (ACG) model. In addition, an analysis of acquisition reform initiatives during the treatment period was conducted to determine if reform efforts impacted missile system cost growth. A pre-reform...

  19. Lumped-parameter fuel rod model for rapid thermal transients

    International Nuclear Information System (INIS)

    Perkins, K.R.; Ramshaw, J.D.

    1975-07-01

    The thermal behavior of fuel rods during simulated accident conditions is extremely sensitive to the heat transfer coefficient which is, in turn, very sensitive to the cladding surface temperature and the fluid conditions. The development of a semianalytical, lumped-parameter fuel rod model which is intended to provide accurate calculations, in a minimum amount of computer time, of the thermal response of fuel rods during a simulated loss-of-coolant accident is described. The results show good agreement with calculations from a comprehensive fuel-rod code (FRAP-T) currently in use at Aerojet Nuclear Company

  20. Model atmospheres and parameters of central stars of planetary nebulae

    International Nuclear Information System (INIS)

    Patriarchi, P.; Cerruti-sola, M.; Perinotto, M.

    1989-01-01

    Non-LTE hydrogen and helium model atmospheres have been obtained for temperatures and gravities relevant to the central stars of planetary nebulae. Low-resolution and high-resolution observations obtained by the IUE satellite have been used along with optical data to determine Zanstra temperatures of the central stars of NGC 1535, NGC 6210, NGC 7009, IC 418, and IC 4593. Comparison of the observed stellar continuum of these stars with theoretical results allowed further information on the stellar temperature to be derived. The final temperatures are used to calculate accurate stellar parameters. 62 refs

  1. Modelled basic parameters for semi-industrial irradiation plant design

    International Nuclear Information System (INIS)

    Mangussi, J.

    2009-01-01

    The basic parameters of an irradiation plant design are the total activity, the product uniformity ratio and the process efficiency. The target density, the minimum dose required and the throughput depend on the use to which the irradiator will be put. In this work, a model for calculating the specific dose rate at several depths in an infinite homogeneous medium produced by a slab source irradiator is presented. The product minimum dose rate for a set of target thicknesses is obtained. The design method steps are detailed and an illustrative example is presented. (author)

  2. Parameter Identification for Nonlinear Circuit Models of Power BAW Resonator

    Directory of Open Access Journals (Sweden)

    CONSTANTINESCU, F.

    2011-02-01

    Full Text Available The large signal operation of bulk acoustic wave (BAW) resonators is characterized by the amplitude-frequency effect and the intermodulation effect. The measurements of these effects, together with that of the small signal frequency characteristic, are used in this paper for the parameter identification of the nonlinear circuit models developed previously by the authors. As the resonator has been connected to the measurement bench by wire bonding, the parasitic elements of this connection have been taken into account, being estimated by solving some electrical and magnetic field problems.

  3. The New York Sepsis Severity Score: Development of a Risk-Adjusted Severity Model for Sepsis.

    Science.gov (United States)

    Phillips, Gary S; Osborn, Tiffany M; Terry, Kathleen M; Gesten, Foster; Levy, Mitchell M; Lemeshow, Stanley

    2018-05-01

    In accordance with Rory's Regulations, hospitals across New York State developed and implemented protocols for sepsis recognition and treatment to reduce variations in evidence informed care and preventable mortality. The New York Department of Health sought to develop a risk assessment model for accurate and standardized hospital mortality comparisons of adult septic patients across institutions using case-mix adjustment. Retrospective evaluation of prospectively collected data. Data from 43,204 severe sepsis and septic shock patients from 179 hospitals across New York State were evaluated. Prospective data were submitted to a database from January 1, 2015, to December 31, 2015. None. Maximum likelihood logistic regression was used to estimate model coefficients used in the New York State risk model. The mortality probability was estimated using a logistic regression model. Variables to be included in the model were determined as part of the model-building process. Interactions between variables were included if they made clinical sense and if their p values were less than 0.05. Model development used a random sample of 90% of available patients and was validated using the remaining 10%. Hosmer-Lemeshow goodness of fit p values were considerably greater than 0.05, suggesting good calibration. Areas under the receiver operator curve in the developmental and validation subsets were 0.770 (95% CI, 0.765-0.775) and 0.773 (95% CI, 0.758-0.787), respectively, indicating good discrimination. Development and validation datasets had similar distributions of estimated mortality probabilities. Mortality increased with rising age, comorbidities, and lactate. The New York Sepsis Severity Score accurately estimated the probability of hospital mortality in severe sepsis and septic shock patients. It performed well with respect to calibration and discrimination. This sepsis-specific model provides an accurate, comprehensive method for standardized mortality comparison of adult
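    The generic workflow described above (fit a logistic risk model on a development split, then check discrimination and calibration on a held-out split) can be sketched as follows. The synthetic case-mix variables, coefficients, and 90/10 split are illustrative assumptions, not the New York State data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-ins for case-mix variables (age, comorbidity count, lactate);
# coefficients and data are illustrative only.
n = 20000
X = np.column_stack([rng.normal(65, 15, n), rng.poisson(2, n), rng.gamma(2.0, 1.5, n)])
logit = -6.0 + 0.03 * X[:, 0] + 0.25 * X[:, 1] + 0.3 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.1, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Discrimination: area under the ROC curve on the held-out 10%.
p_val = model.predict_proba(X_val)[:, 1]
print("AUC:", roc_auc_score(y_val, p_val))

# Calibration slope/intercept: refit a logistic regression of the outcome on
# the linear predictor of the validation-set probabilities.
lp = np.log(p_val / (1 - p_val)).reshape(-1, 1)
calib = LogisticRegression(max_iter=1000).fit(lp, y_val)
print("calibration slope:", calib.coef_[0][0], "intercept:", calib.intercept_[0])
```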

  4. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW/interplanetary magnetic field (IMF conditions (e.g. high SW speed, low cone angle the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock. Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through

  5. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    Full Text Available An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance and absorption of suspended matter without pigments and yellow substance in detritus using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data has been applied to sea waters of different types in the open ocean (case 1. Using the effective numerical single parameter classification with the water type optical index m as a parameter over the whole range of the open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  6. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on the stream water quality. Five sites were selected along the stream: two sites upstream of the point where leachate mixes with the surface water, one site of the leachate itself, and two sites affected by the leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study the correlation coefficient r among the different water quality parameters at the various sites was calculated using the Pearson model, and the average of each correlation between two parameters was also calculated. This showed that TDS-EC and pH-BOD have significantly increasing r values, while temperature-TDS, temperature-EC, DO-TBL and DO-COD have decreasing r values. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differ significantly among the sites. By the application of these two statistical approaches, TDS and EC show a strongly positive correlation because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which was further confirmed by using linear regression. (author)
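
    As an illustration of the kind of analysis described in this record, the short Python sketch below computes a Pearson correlation, a single-factor ANOVA across sites, and a simple linear regression with pandas and SciPy. The site labels and measurement values are synthetic placeholders, not the Salhad data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical measurements: rows are samples, columns are water quality parameters.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "site": np.repeat(["S1", "S2", "S3", "S4", "S5"], 6),
    "EC":   rng.normal(800, 150, 30),   # electrical conductivity
    "TDS":  rng.normal(500, 100, 30),   # total dissolved solids
    "pH":   rng.normal(7.5, 0.3, 30),
    "BOD":  rng.normal(12, 4, 30),      # biological oxygen demand
})

# Pearson correlation between two parameters (e.g. TDS and EC).
r, p = stats.pearsonr(df["TDS"], df["EC"])
print(f"Pearson r(TDS, EC) = {r:.2f}, p = {p:.3f}")

# Single-factor ANOVA: does EC differ among the five sites?
groups = [g["EC"].to_numpy() for _, g in df.groupby("site")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA for EC across sites: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Simple linear regression of EC on TDS to confirm the relationship.
fit = stats.linregress(df["TDS"], df["EC"])
print(f"EC ~ {fit.slope:.2f} * TDS + {fit.intercept:.1f} (r = {fit.rvalue:.2f})")
```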

  7. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1 a screening method (Morris for qualitative ranking of parameters, and (2 a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol. First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
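
    The two-step workflow described here (Morris screening followed by variance-based Sobol indices) can be sketched with the SALib package, assuming it is available; the toy model function, parameter names and bounds below are placeholders standing in for the Xin'anjiang model.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.sample import saltelli
from SALib.analyze import morris as morris_analyze
from SALib.analyze import sobol

# Placeholder problem definition; the real Xin'anjiang parameters and ranges differ.
problem = {
    "num_vars": 3,
    "names": ["K", "B", "SM"],
    "bounds": [[0.5, 1.5], [0.1, 0.4], [10.0, 60.0]],
}

def toy_model(x):
    # Stand-in for a hydrological simulation returning one scalar objective.
    k, b, sm = x
    return k ** 2 + 5.0 * b + 0.01 * sm

# Step 1: Morris screening for a qualitative ranking of parameters.
X_m = morris_sample.sample(problem, N=100, num_levels=4)
Y_m = np.apply_along_axis(toy_model, 1, X_m)
res_m = morris_analyze.analyze(problem, X_m, Y_m, num_levels=4)
print("Morris mu*:", dict(zip(problem["names"], np.round(res_m["mu_star"], 3))))

# Step 2: variance-based (Sobol) indices for the retained parameters.
X_s = saltelli.sample(problem, 512)
Y_s = np.apply_along_axis(toy_model, 1, X_s)
res_s = sobol.analyze(problem, Y_s)
print("First-order indices:", dict(zip(problem["names"], np.round(res_s["S1"], 3))))
print("Total indices:      ", dict(zip(problem["names"], np.round(res_s["ST"], 3))))
```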

  8. ELECTRICAL CONDUCTIVITY OF SOYBEAN SEED CULTIVARS AND ADJUSTED MODELS OF LEAKAGE CURVES ALONG THE TIME

    Directory of Open Access Journals (Sweden)

    ADRIANA RITA SALINAS

    2010-01-01

    Full Text Available The objective of this work was to study the behavior of ten soybean [Glycine max (L.) Merr.] cultivars using the electrical conductivity (EC) test, by comparing the curves of cumulative electrolyte leakage over time, and to establish the statistical model that allows the best adjustment of the curves. Ten soybean cultivars, mechanically harvested in 2004 at the EEA Oliveros, Santa Fe, Argentina, were used. EC was measured for 100 individual seeds of each cultivar during 20 hours of immersion, at intervals of 1 hour, using equipment that permits individual seed analysis (Seed Automatic Analyzer SAD 9000S). Two statistical models were proposed, using the SAS statistical program, to study the EC over time for the 10 cultivars and to select the model that best describes the EC behavior over time. Model 1 allowed comparisons of EC over time between cultivars and the study of the influence of the production environment on the physiological quality of soybean seeds. The time to reach stabilization of the EC must not be shorter than 19 hours for the different cultivars.

  9. Family caregiver adjustment and stroke survivor impairment: A path analytic model.

    Science.gov (United States)

    Pendergrass, Anna; Hautzinger, Martin; Elliott, Timothy R; Schilling, Oliver; Becker, Clemens; Pfeiffer, Klaus

    2017-05-01

    Depressive symptoms are a common problem among family caregivers of stroke survivors. The purpose of this study was to examine the association between care recipient's impairment and caregiver depression, and determine the possible mediating effects of caregiver negative problem-orientation, mastery, and leisure time satisfaction. The evaluated model was derived from Pearlin's stress process model of caregiver adjustment. We analyzed baseline data from 122 strained family members who were assisting stroke survivors in Germany for a minimum of 6 months and who consented to participate in a randomized clinical trial. Depressive symptoms were measured with the Center for Epidemiological Studies Depression Scale. The cross-sectional data were analyzed using path analysis. The results show an adequate fit of the model to the data, χ2(1, N = 122) = 0.17, p = .68; comparative fit index = 1.00; root mean square error of approximation: p caregiver depressive symptoms. Results indicate that caregivers at risk for depression reported a negative problem orientation, low caregiving mastery, and low leisure time satisfaction. The situation is particularly affected by the frequency of stroke survivor problematic behavior, and by the degree of their impairments in activities of daily living. The findings provide empirical support for the Pearlin's stress model and emphasize how important it is to target these mediators in health promotion interventions for family caregivers of stroke survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  11. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    Science.gov (United States)

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured BRDF of three kinds of samples and simulated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.
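
    As a rough sketch of fitting a multi-parameter reflectance model to measured BRDF samples, the snippet below uses SciPy's differential evolution as a generic global optimizer in place of the hybrid ABC algorithm, with a placeholder five-parameter function rather than the model from the paper; the "measured" values are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical measured BRDF samples over incidence angle theta (radians).
theta = np.linspace(0.1, 1.3, 40)
rng = np.random.default_rng(1)
measured = (0.05 + 0.8 * np.exp(-((theta - 0.6) ** 2) / (2 * 0.2 ** 2))
            + rng.normal(0, 0.01, theta.size))

def brdf_model(theta, p):
    # Placeholder five-parameter form: diffuse floor + Gaussian specular lobe
    # + linear grazing term. Not the model used in the paper.
    kd, ks, mu, sigma, kg = p
    return kd + ks * np.exp(-((theta - mu) ** 2) / (2 * sigma ** 2)) + kg * theta

def cost(p):
    # Sum of squared residuals between model and measurements.
    return np.sum((brdf_model(theta, p) - measured) ** 2)

bounds = [(0, 1), (0, 2), (0, 1.5), (0.01, 1), (-0.5, 0.5)]
result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
print("Fitted parameters:", np.round(result.x, 3))
print("Residual sum of squares:", float(result.fun))
```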

  12. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess comparative efficiency and, on that basis, to choose the optimal scheme of agricultural production structure adjustment from among multiple candidate schemes. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified how these candidate plans could be improved. Finally, a further method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when the adjustment of the agricultural production structure is carried out.

  13. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal values.
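
    A minimal sketch of one of the compared approaches, stabilized inverse probability weighting built on a propensity score model, is given below for simulated data; the data-generating process, variable names and effect sizes are invented for illustration and do not reproduce the simulation study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Simulated confounder, exposure and binary outcome (illustrative only).
x = rng.normal(size=n)                                   # confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))      # treatment depends on x
p_out = 1 / (1 + np.exp(-(-2.0 + 0.5 * treat + 1.0 * x)))
y = rng.binomial(1, p_out)                               # outcome depends on both

# Propensity score model: probability of treatment given the confounder.
ps_model = LogisticRegression().fit(x.reshape(-1, 1), treat)
ps = ps_model.predict_proba(x.reshape(-1, 1))[:, 1]

# Stabilized inverse probability weights.
p_marginal = treat.mean()
w = np.where(treat == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))

# Weighted outcome comparison (risk difference in the weighted pseudo-population).
rd = (np.average(y[treat == 1], weights=w[treat == 1])
      - np.average(y[treat == 0], weights=w[treat == 0]))
print(f"IPW-adjusted risk difference: {rd:.3f}")
print(f"Unadjusted risk difference:   {y[treat == 1].mean() - y[treat == 0].mean():.3f}")
```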

  14. Convergence of surface diffusion parameters with model crystal size

    Science.gov (United States)

    Cohen, Jennifer M.; Voter, Arthur F.

    1994-07-01

    A study of the variation in the calculated quantities for adatom diffusion with respect to the size of the model crystal is presented. The reported quantities include surface diffusion barrier heights, pre-exponential factors, and dynamical correction factors. Embedded atom method (EAM) potentials were used throughout this effort. Both the layer size and the depth of the crystal were found to influence the values of the Arrhenius factors significantly. In particular, exchange type mechanisms required a significantly larger model than standard hopping mechanisms to determine adatom diffusion barriers of equivalent accuracy. The dynamical events that govern the corrections to transition state theory (TST) did not appear to be as sensitive to crystal depth. Suitable criteria for the convergence of the diffusion parameters with regard to the rate properties are illustrated.

  15. A distributed parameter wire model for transient electrical discharges

    International Nuclear Information System (INIS)

    Maier, W.B. II; Kadish, A.; Sutherland, C.D.; Robiscoe, R.T.

    1990-01-01

    A model for freely propagating transient electrical discharges, such as lightning and punch-through arcs, is developed in this paper. We describe the electromagnetic fields by Maxwell's equations and we represent the interaction of electric fields with the medium to produce current by ∂J/∂t = ω²(E - E*Ĵ)/4π, where ω and E* are parameters characteristic of the medium, J is the current density, and Ĵ ≡ J/|J|. We illustrate the properties of this model for small-diameter, guided, cylindrically symmetric discharges. Analytic, numerical, and approximate solutions are given for special cases. The model describes, in a new and comprehensive fashion, certain macroscopic discharge properties, such as threshold behavior, quenching and reignition, path tortuosity, discharge termination with nonzero charge density remaining along the discharge path, and other experimentally observed discharge phenomena. Fields, current densities, and charge densities are quantitatively determined from given boundary and initial conditions. We suggest that many macroscopic discharge properties are properly explained by the model as electromagnetic phenomena, and we discuss extensions of the model to include chemistry, principally ionization and recombination

  16. Diabatic models with transferrable parameters for generalized chemical reactions

    International Nuclear Information System (INIS)

    Reimers, Jeffrey R; McKemmish, Laura K; McKenzie, Ross H; Hush, Noel S

    2017-01-01

    Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

  17. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTS). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  18. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  19. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from preceding days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified by the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for the seasonal and trend item forecasting respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve the accuracy, even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by the paired-sample T tests
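
    The general idea behind this kind of hybrid approach (deseasonalize with seasonal indices, forecast the trend with a regression, then recombine) can be sketched as follows; this is not the authors' SEAM formulation, and the synthetic weekly load pattern is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily load with a weekly seasonal pattern and a mild upward trend.
days = np.arange(6 * 7)                       # six weeks of history
weekly_shape = np.array([1.05, 1.10, 1.08, 1.06, 1.04, 0.88, 0.79])
load = (1000 + 2.0 * days) * weekly_shape[days % 7] + rng.normal(0, 10, days.size)

# Seasonal indices: average ratio of load to its weekly mean, per day of week.
weekly_mean = load.reshape(-1, 7).mean(axis=1).repeat(7)
indices = np.array([np.mean(load[days % 7 == d] / weekly_mean[days % 7 == d])
                    for d in range(7)])

# Deseasonalize and fit a linear trend by least squares.
deseasonalized = load / indices[days % 7]
slope, intercept = np.polyfit(days, deseasonalized, 1)

# 1-week-ahead forecast: extrapolate the trend, then re-apply the seasonal index.
future = np.arange(days[-1] + 1, days[-1] + 8)
forecast = (intercept + slope * future) * indices[future % 7]
print(np.round(forecast, 1))
```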

  20. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
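
    A stripped-down example of MCMC-Bayesian parameter inversion is sketched below: a random-walk Metropolis sampler calibrates a single parameter of a toy runoff model against noisy synthetic observations. The model, prior bounds and noise level are invented for the sketch and are unrelated to CLM4 itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "model": runoff response as a function of one hydrologic parameter theta.
def simulate(theta, forcing):
    return theta * np.sqrt(forcing)

forcing = rng.uniform(1.0, 10.0, 50)
obs = simulate(2.5, forcing) + rng.normal(0, 0.3, forcing.size)   # synthetic observations
sigma = 0.3                                                       # assumed known noise

def log_posterior(theta):
    if not 0.0 < theta < 10.0:                   # uniform prior bounds
        return -np.inf
    resid = obs - simulate(theta, forcing)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian log-likelihood (up to a constant)

# Random-walk Metropolis sampler.
samples, theta, log_p = [], 1.0, log_posterior(1.0)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)
    log_p_prop = log_posterior(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p:
        theta, log_p = prop, log_p_prop
    samples.append(theta)

posterior = np.array(samples[5000:])             # discard burn-in
print(f"Posterior mean = {posterior.mean():.3f}, 95% interval = "
      f"({np.percentile(posterior, 2.5):.3f}, {np.percentile(posterior, 97.5):.3f})")
```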

  1. Monitoring risk-adjusted outcomes in congenital heart surgery: does the appropriateness of a risk model change with time?

    Science.gov (United States)

    Tsang, Victor T; Brown, Katherine L; Synnergren, Mats Johanssen; Kang, Nicholas; de Leval, Marc R; Gallivan, Steve; Utley, Martin

    2009-02-01

    Risk adjustment of outcomes in pediatric congenital heart surgery is challenging due to the great diversity in diagnoses and procedures. We have previously shown that variable life-adjusted display (VLAD) charts provide an effective graphic display of risk-adjusted outcomes in this specialty. A question arises as to whether the risk model used remains appropriate over time. We used a recently developed graphic technique to evaluate the performance of an existing risk model among those patients at a single center during 2000 to 2003 originally used in model development. We then compared the distribution of predicted risk among these patients with that among patients in 2004 to 2006. Finally, we constructed a VLAD chart of risk-adjusted outcomes for the latter period. Among 1083 patients between April 2000 and March 2003, the risk model performed well at predicted risks above 3%, underestimated mortality at 2% to 3% predicted risk, and overestimated mortality below 2% predicted risk. There was little difference in the distribution of predicted risk among these patients and among 903 patients between June 2004 and October 2006. Outcomes for the more recent period were appreciably better than those expected according to the risk model. This finding cannot be explained by any apparent bias in the risk model combined with changes in case-mix. Risk models can, and hopefully do, become out of date. There is scope for complacency in the risk-adjusted audit if the risk model used is not regularly recalibrated to reflect changing standards and expectations.
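
    The VLAD statistic referred to in this record is straightforward to compute: a running sum of predicted risk minus observed outcome, so the curve drifts upward when outcomes are better than the risk model expects. The sketch below uses synthetic predicted risks and outcomes, not the congenital heart surgery data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic series of operations: each case has a model-predicted mortality risk
# and an observed outcome (1 = death), drawn here from a slightly lower true risk
# to mimic performance better than the risk model expects.
n = 400
predicted_risk = rng.uniform(0.01, 0.15, n)
observed_death = rng.binomial(1, 0.8 * predicted_risk)

# VLAD statistic: cumulative (expected - observed) deaths after each case.
vlad = np.cumsum(predicted_risk - observed_death)

print(f"Net lives 'saved' relative to expectation after {n} cases: {vlad[-1]:.1f}")
# Plotting vlad against case number gives the usual VLAD chart, e.g.:
#   import matplotlib.pyplot as plt; plt.plot(vlad); plt.show()
```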

  2. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1990-12-31

    The recursive least-squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. The curves of convergence speed toward the parameters' numerical values are shown. [Espanol] La tecnica recursiva del metodo de los minimos cuadrados se emplea para obtener un modelo multivariable de tipo autorregresivo de promedio movil, necesario para el diseno de un controlador autoajustable muitivariable. En el articulo, se describe la tecnica empleada y los resultados obtenidos con la caracterizacion de la estructura del modelo y la estimacion parametrica. Se observan las curvas de la velocidad de convergencia hacia los valores numericos de los parametros.

  3. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least-squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. The curves of convergence speed toward the parameters' numerical values are shown. [Espanol] La tecnica recursiva del metodo de los minimos cuadrados se emplea para obtener un modelo multivariable de tipo autorregresivo de promedio movil, necesario para el diseno de un controlador autoajustable muitivariable. En el articulo, se describe la tecnica empleada y los resultados obtenidos con la caracterizacion de la estructura del modelo y la estimacion parametrica. Se observan las curvas de la velocidad de convergencia hacia los valores numericos de los parametros.
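
    The recursive least-squares identification described in these two records can be illustrated with a minimal ARX example; the simulated process coefficients, model order and forgetting factor below are invented for the sketch and are not taken from the articles.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a simple ARX process: y[k] = a1*y[k-1] + b1*u[k-1] + noise.
a1_true, b1_true = 0.7, 1.5
N = 500
u = rng.normal(0, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a1_true * y[k - 1] + b1_true * u[k - 1] + rng.normal(0, 0.05)

# Recursive least squares with forgetting factor lam.
theta = np.zeros(2)                 # estimates of [a1, b1]
P = np.eye(2) * 1000.0              # large initial covariance
lam = 0.99
for k in range(1, N):
    phi = np.array([y[k - 1], u[k - 1]])          # regressor vector
    K = P @ phi / (lam + phi @ P @ phi)           # gain vector
    theta = theta + K * (y[k] - phi @ theta)      # update parameter estimates
    P = (P - np.outer(K, phi @ P)) / lam          # update covariance

print("True parameters:     ", [a1_true, b1_true])
print("Estimated parameters:", np.round(theta, 3))
```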

  4. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-polynomial through fitting the Az values estimated from globally distributed Galileo Sensor Stations (GSS. In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are also analyzed. The accuracy of Global Position System (GPS broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over the continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC provided by global ionospheric maps (GIM, GPS test stations and JASON-2 altimeter. The results show that NeQuickG can mitigate ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level with NeQuickG, which is a bit better than that of GPS broadcast Klobuchar model.

  5. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

    We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w0-wa' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ

  6. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    Science.gov (United States)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that captures variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  7. Adjusting shape-memory properties of amorphous polyether urethanes and radio-opaque composites thereof by variation of physical parameters during programming

    International Nuclear Information System (INIS)

    Cui, J; Kratz, K; Lendlein, A

    2010-01-01

    Various composites have been prepared to improve the mechanical properties of shape-memory polymers (SMPs) or to incorporate new functionalities (e.g. magneto-sensitivity) in polymer matrices. In this paper, we systematically investigated the influence of the programming temperature T_prog and the applied strain ε_m as parameters of the shape-memory creation procedure (SMCP) on the shape-memory properties of an amorphous polyether urethane and radio-opaque composites thereof. Recovery under stress-free conditions was quantified by the shape recovery rate R_r and the switching temperature T_sw, while the maximum recovery stress σ_max was determined at the characteristic temperature T_σ,max under constant strain conditions. Excellent shape-memory properties were achieved in all experiments, with R_r values between 80 and 98%. σ_max could be tailored from 0.4 to 3.7 MPa. T_sw and T_σ,max could be systematically adjusted from 33 to 71 °C by variation of T_prog for each investigated sample. The investigated radio-opaque shape-memory composites will form the material basis for mechanically active scaffolds, which could serve as an intelligent substitute for the extracellular matrix to study the influence of mechanical stimulation on tissue development

  8. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) can not be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea Region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the North coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations of the GRACE satellite mission it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data is corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Frans Joseph Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be more concentrated to the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  9. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
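
    A compact sketch of the direct risk standardisation idea (estimate each patient's risk from a casemix model, bin patients into risk categories, compute category-specific event rates per centre, and take a weighted sum) is given below on synthetic admissions; the centres, risks and outcomes are invented, and a Beta draw stands in for the casemix-model predictions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)

# Synthetic admissions: a centre label, a casemix-model risk and an outcome.
n = 20000
centre = rng.choice(["A", "B", "C"], n)
risk = rng.beta(2, 18, n)                          # stand-in for casemix-model predictions
p_death = np.clip(risk * np.where(centre == "B", 1.2, 1.0), 0, 1)
death = rng.binomial(1, p_death)                   # centre B made slightly worse

df = pd.DataFrame({"centre": centre, "risk": risk, "death": death})

# Define risk categories from the pooled risk distribution (here, quintiles).
df["risk_cat"] = pd.qcut(df["risk"], q=5, labels=False)

# Weights: share of all patients in each risk category (the standard population).
weights = df["risk_cat"].value_counts(normalize=True).sort_index()

# Directly risk-standardised rate per centre: weighted sum of category-specific rates.
cat_rates = df.groupby(["centre", "risk_cat"])["death"].mean().unstack()
drs = (cat_rates * weights.to_numpy()).sum(axis=1)
print(drs.round(4))
```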

  10. Set up of a method for the adjustment of resonance parameters on integral experiments; Mise au point d`une methode d`ajustement des parametres de resonance sur des experiences integrales

    Energy Technology Data Exchange (ETDEWEB)

    Blaise, P.

    1996-12-18

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions.The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.).

  11. Location memory for dots in polygons versus cities in regions: evaluating the category adjustment model.

    Science.gov (United States)

    Friedman, Alinda; Montello, Daniel R; Burte, Heather

    2012-09-01

    We conducted 3 experiments to examine the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in circumstances in which the category boundaries were irregular schematized polygons made from outlines of maps. For the first time, accuracy was tested when only perceptual and/or existing long-term memory information about identical locations was cued. Participants from Alberta, Canada and California received 1 of 3 conditions: dots-only, in which a dot appeared within the polygon, and after a 4-s dynamic mask the empty polygon appeared and the participant indicated where the dot had been; dots-and-names, in which participants were told that the first polygon represented Alberta/California and that each dot was in the correct location for the city whose name appeared outside the polygon; and names-only, in which there was no first polygon, and participants clicked on the city locations from extant memory alone. Location recall in the dots-only and dots-and-names conditions did not differ from each other and had small but significant directional errors that pointed away from the centroids of the polygons. In contrast, the names-only condition had large and significant directional errors that pointed toward the centroids. Experiments 2 and 3 eliminated the distribution of stimuli and overall screen position as causal factors. The data suggest that in the "classic" category adjustment paradigm, it is difficult to determine a priori when Bayesian cue combination is applicable, making Bayesian analysis less useful as a theoretical approach to location estimation. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  12. Correlation and prediction of osmotic coefficient and water activity of aqueous electrolyte solutions by a two-ionic parameter model

    International Nuclear Information System (INIS)

    Pazuki, G.R.

    2005-01-01

    In this study, osmotic coefficients and water activities in aqueous solutions have been modeled using a new approach based on the Pitzer model. This model contains two physically significant ionic parameters regarding ionic solvation and the closest distance of approach between ions in a solution. The proposed model was evaluated by estimating the osmotic coefficients of nine electrolytes in aqueous solutions. The obtained results showed that the model is suitable for predicting the osmotic coefficients in aqueous electrolyte solutions. Using adjustable parameters, which were obtained by regression between the experimental osmotic coefficients and the results of this model, the water activity coefficients of aqueous solutions were calculated. The average absolute relative deviations between the experimental data and the calculated osmotic coefficients indicated good agreement
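
    Regressing a small number of adjustable parameters against experimental osmotic coefficients can be sketched with SciPy's curve_fit; the two-parameter functional form below is a generic placeholder (a limiting-law term plus a linear correction), not the Pitzer-based model of the paper, and the data points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical experimental data: molality m (mol/kg) and osmotic coefficient phi.
m = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
phi_exp = np.array([0.932, 0.921, 0.917, 0.935, 0.983, 1.045, 1.116])

def phi_model(m, a, b):
    # Placeholder two-parameter form: Debye-Hueckel-like limiting term plus a
    # linear correction. The actual model in the paper is different.
    A = 0.392                        # approximate limiting slope for water at 25 °C
    sqrt_m = np.sqrt(m)
    return 1.0 - A * sqrt_m / (1.0 + a * sqrt_m) + b * m

params, cov = curve_fit(phi_model, m, phi_exp, p0=[1.0, 0.05])
a_fit, b_fit = params
aard = np.mean(np.abs(phi_model(m, *params) - phi_exp) / phi_exp) * 100
print(f"a = {a_fit:.3f}, b = {b_fit:.4f}, AARD = {aard:.2f}%")
```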

  13. Application of multi-parameter chorus and plasmaspheric hiss wave models in radiation belt modeling

    Science.gov (United States)

    Aryan, H.; Kang, S. B.; Balikhin, M. A.; Fok, M. C. H.; Agapitov, O. V.; Komar, C. M.; Kanekal, S. G.; Nagai, T.; Sibeck, D. G.

    2017-12-01

    Numerical simulation studies of the Earth's radiation belts are important to understand the acceleration and loss of energetic electrons. The Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model along with many other radiation belt models require inputs for pitch angle, energy, and cross diffusion of electrons, due to chorus and plasmaspheric hiss waves. These parameters are calculated using statistical wave distribution models of chorus and plasmaspheric hiss amplitudes. In this study we incorporate recently developed multi-parameter chorus and plasmaspheric hiss wave models based on geomagnetic index and solar wind parameters. We perform CIMI simulations for two geomagnetic storms and compare the flux enhancement of MeV electrons with data from the Van Allen Probes and Akebono satellites. We show that the relativistic electron fluxes calculated with multi-parameter wave models resembles the observations more accurately than the relativistic electron fluxes calculated with single-parameter wave models. This indicates that wave models based on a combination of geomagnetic index and solar wind parameters are more effective as inputs to radiation belt models.

  14. I. Nuclear and neutron matter calculations with isobars. II. A model calculation of Fermi liquid parameters for liquid 3He

    International Nuclear Information System (INIS)

    Ainsworth, T.L.

    1983-01-01

    The Δ(1232) plays an important role in determining the properties of nuclear and neutron matter. The effects of the Δ resonance are incorporated explicitly by using a coupled channel formalism. A method for constraining a lowest order variational calculation, appropriate when nucleon internal degrees of freedom are made explicit, is presented. Different N-N potentials were calculated and fit to phase shift data and deuteron properties. The potentials were constructed to test the relative importance of the Δ resonance on nuclear properties. The symmetry energy and incompressibility of nuclear matter are generally reproduced by this calculation. Neutron matter results lead to appealing neutron star models. Fermi liquid parameters for ³He are calculated with a model that includes both direct and induced terms. A convenient form of the direct interaction is obtained in terms of the parameters. The form of the direct interaction ensures that the forward scattering sum rule (Pauli principle) is obeyed. The parameters are adjusted to fit the experimentally determined F_0^s, F_0^a, and F_1^s Landau parameters. Higher order Landau parameters are calculated by the self-consistent solution of the equations; comparison to experiment is good. The model also leads to a preferred value for the effective mass of ³He. Of the three parameters only one shows any dependence on pressure. An exact sum rule is derived relating this parameter to a specific summation of Landau parameters

  15. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  16. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity.

    Science.gov (United States)

    King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M

    2015-11-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.

  17. Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters

    Directory of Open Access Journals (Sweden)

    Adeniyi Ganiyu Adeogun

    2015-10-01

    Full Text Available This paper presents the outcome of our investigation of the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed through coupling of an existing 1D sewer network model (SWMM) and a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as mesh resolution, Digital Elevation Model (DEM) resolution and roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning friction value, the DEM resolution and the area of the triangular mesh. Also, changes in the aforementioned modeling parameters influence the flood characteristics, such as the inundation extent, the flow depth and the velocity across the model domain. Keywords: Inundation, DEM, Sensitivity analysis, Model coupling, Flooding

  18. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    Science.gov (United States)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS on a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root

  19. [Construction and validation of a multidimensional model of students' adjustment to college context].

    Science.gov (United States)

    Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S

    2006-05-01

    This article presents a model of interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year college students of the University of Minho. The path analysis results suggest that initial expectations of the students' involvement in academic life constituted an effective predictor of their involvement during their first year; as well as the social climate of the classroom influenced their involvement, well-being and levels of satisfaction obtained. However, these relationships were not strong enough to influence the criterion variables integrated in the model (academic performance and psychosocial development). Academic performance was predicted by the high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development showed at the time they entered college. Though more research is needed, these results point to the importance of students' pre-college characteristics when we are considering the quality of their college adjustment process.

  20. Evaluation of the perceptual grouping parameter in the CTVA model

    Directory of Open Access Journals (Sweden)

    Manuel Cortijo

    2005-01-01

    Full Text Available The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably in one and two dimensions (CTVA-2D) with the experimental results (reaction times and accuracy of response). The difference between reaction times for compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.

  1. Experimental parameters differentially affect the humoral response of the cholera-toxin-based murine model of food allergy

    DEFF Research Database (Denmark)

    Kroghsbo, S.; Christensen, Hanne Risager; Frøkiær, Hanne

    2003-01-01

    Background: Recent studies have developed a murine model of IgE-mediated food allergy based on oral coadministration of antigen and cholera toxin (CT) to establish a maximal response for studying immunopathogenic mechanisms and immunotherapeutic strategies. However, for studying subtle...... interested in characterizing the individual effects of the parameters in the CT-based model: CT dose, antigen type and dose, and number of immunizations. Methods: BALB/c mice were orally sensitized weekly for 3 or 7 weeks with graded doses of CT and various food antigens (soy-trypsin inhibitor, ovalbumin...... of the antibody response depended on the type of antigen and number of immunizations. Conclusions: The critical parameters of the CT-based murine allergy model differentially control the intensity and kinetics of the developing immune response. Adjustment of these parameters could be a key tool for tailoring...

  2. Examining the Functional Specification of Two-Parameter Model under Location and Scale Parameter Condition

    OpenAIRE

    Nakashima, Takahiro

    2006-01-01

    The functional specification of the mean-standard deviation approach is examined under the location and scale parameter condition. Firstly, the full set of restrictions imposed on the mean-standard deviation function under the location and scale parameter condition is made clear. Secondly, an examination based on these restrictions derives new properties of the mean-standard deviation function concerning the applicability of additive separability and the curvature of ex...

  3. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
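
    As a rough illustration of the Bayesian calibration step discussed in this abstract, the sketch below calibrates a single rate parameter of a toy exponential-decay model with a random-walk Metropolis sampler; the model, prior bounds, and noise level are illustrative assumptions and not the HIV or heat models of the work.

      # Minimal sketch: Bayesian model calibration with random-walk Metropolis.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 25)
      true_k = 0.4
      data = np.exp(-true_k * t) + rng.normal(0.0, 0.02, t.size)   # synthetic observations

      def log_post(k, sigma=0.02):
          if not (0.0 < k < 2.0):                  # flat prior on (0, 2)
              return -np.inf
          resid = data - np.exp(-k * t)
          return -0.5 * np.sum((resid / sigma) ** 2)

      chain, k = [], 1.0
      for _ in range(5000):
          prop = k + rng.normal(0.0, 0.05)         # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(k):
              k = prop
          chain.append(k)

      burned = np.array(chain[1000:])
      print(burned.mean(), burned.std())           # posterior mean and spread of the rate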

  4. Modeling the outflow of liquid with initial supercritical parameters using the relaxation model for condensation

    Directory of Open Access Journals (Sweden)

    Lezhnin Sergey

    2017-01-01

    Full Text Available The two-temperature model of the outflow from a vessel with initial supercritical parameters of the medium has been implemented. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.

  5. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally under eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed by using a one-dimensional (1D) modeling approach. The dynamic effect of double layer capacitance on the electrochemical domain and the dynamic effect of thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC have been given and discussed in detail. A parameter sensitivity study has also been performed and discussed by using the statistical Multi Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of parameters on the modeling accuracy.
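
    The statistical MPSA method cited above is commonly implemented by sampling the parameters, splitting the Monte-Carlo runs into acceptable and unacceptable sets with an objective criterion, and scoring each parameter by the separation between the two sets; the sketch below follows that pattern with a toy objective function and an arbitrary median threshold, both of which are assumptions for illustration only.

      # Minimal sketch: split-sample Multi-Parameter Sensitivity Analysis (MPSA).
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(2)
      n, names = 2000, ["p1", "p2", "p3"]
      X = rng.uniform(0.0, 1.0, size=(n, len(names)))
      error = (X[:, 0] - 0.5) ** 2 + 0.1 * X[:, 1] + 0.01 * rng.normal(size=n)  # toy objective

      acceptable = error <= np.median(error)        # arbitrary acceptance criterion
      for j, name in enumerate(names):
          stat, _ = ks_2samp(X[acceptable, j], X[~acceptable, j])   # distribution separation
          print(f"{name}: sensitivity index = {stat:.3f}")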

  6. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Science.gov (United States)

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative the parenting they perceived; and the poorer the perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased children's intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.

  7. Kriging modeling and SPSA adjusting PID with KPWF compensator control of IPMC gripper for mm-sized objects

    Science.gov (United States)

    Chen, Yang; Hao, Lina; Yang, Hui; Gao, Jinhai

    2017-12-01

    Ionic polymer metal composite (IPMC), a new smart material, has received wide attention in the micromanipulation field. In this paper, a novel two-finger gripper which contains an IPMC actuator and an ultrasensitive force sensor is proposed and fabricated. The IPMC finger of the gripper for mm-sized objects can perform gripping and releasing motions, while the other finger works not only as a support but also as a force sensor. Using the feedback signal of the force sensor, this integrated actuating and sensing gripper can grip miniature objects at the millimeter scale. The Kriging model is used to describe the nonlinear characteristics of the IPMC for the first time, and a control scheme in which simultaneous perturbation stochastic approximation adjusts the parameters of a proportional-integral-derivative controller with a Kriging predictor wavelet filter compensator is applied to track the gripping force of the gripper. High-precision force tracking in a foam-ball manipulation process is obtained on a semi-physical experimental platform, which demonstrates that this gripper for mm-sized objects can work well in manipulation applications.
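
    The simultaneous perturbation stochastic approximation (SPSA) adjustment loop referred to in this abstract can be illustrated independently of the IPMC hardware. In the sketch below the plant, the cost function, and the gain ranges are illustrative assumptions; only the two-sided simultaneous-perturbation gradient estimate reflects the technique itself.

      # Minimal sketch: SPSA tuning of PID gains on a toy second-order plant.
      import numpy as np

      rng = np.random.default_rng(3)

      def cost(gains, steps=400, dt=0.01):
          # track a unit set-point with a PID loop around a simple damped mass
          kp, ki, kd = gains
          x, v, integ, prev_err, j = 0.0, 0.0, 0.0, 1.0, 0.0
          for _ in range(steps):
              err = 1.0 - x
              integ += err * dt
              u = kp * err + ki * integ + kd * (err - prev_err) / dt
              prev_err = err
              v += dt * (-0.5 * v + u)
              x += dt * v
              j += err ** 2 * dt                # integrated squared tracking error
          return j

      theta = np.array([2.0, 0.5, 0.5])         # initial [Kp, Ki, Kd]
      for k in range(300):
          a = 0.2 / (k + 1) ** 0.602            # decaying step size
          c = 0.1 / (k + 1) ** 0.101            # decaying perturbation size
          delta = rng.choice([-1.0, 1.0], size=3)
          g = (cost(theta + c * delta) - cost(theta - c * delta)) / (2 * c * delta)
          theta = np.clip(theta - a * g, 0.0, 20.0)
      print(theta, cost(theta))                 # tuned gains and their tracking cost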

  8. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  9. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    NARCIS (Netherlands)

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over

  10. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
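
    For orientation, the sketch below evaluates the classical binary (molecular) NRTL activity-coefficient equations; the electrolyte NRTL model used in the ASPEN work adds ion-interaction terms that are not reproduced here, and the alpha and tau values are illustrative assumptions.

      # Minimal sketch: binary molecular NRTL activity coefficients.
      import numpy as np

      def nrtl_binary(x1, tau12, tau21, alpha=0.3):
          x2 = 1.0 - x1
          G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
          ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                           + tau12 * G12 / (x2 + x1 * G12)**2)
          ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                           + tau21 * G21 / (x1 + x2 * G21)**2)
          return np.exp(ln_g1), np.exp(ln_g2)

      x1 = np.linspace(0.01, 0.99, 5)
      print(nrtl_binary(x1, tau12=1.2, tau21=0.8))   # activity coefficients of both species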

  11. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall data at fine time scales varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process reproduces adequately the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
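
    The adjusting step of such disaggregation schemes can be illustrated with a short sketch: fine-scale depths produced by a stochastic generator are proportionally rescaled so that they aggregate exactly to the observed coarse-scale total. The crude generator below is a stand-in, not the Bartlett-Lewis model of the HyetosMinute R package.

      # Minimal sketch: proportional adjustment of disaggregated rainfall depths.
      import numpy as np

      rng = np.random.default_rng(4)

      def disaggregate(daily_total_mm, n_slots=288):
          # crude synthetic 5-min pattern: a few wet slots with exponential depths
          wet = rng.random(n_slots) < 0.15
          synthetic = np.where(wet, rng.exponential(0.4, n_slots), 0.0)
          if synthetic.sum() == 0.0:
              synthetic[rng.integers(n_slots)] = 1.0
          # proportional adjustment: keep the pattern, match the daily total exactly
          return synthetic * (daily_total_mm / synthetic.sum())

      fine = disaggregate(12.5)
      print(fine.sum())          # 12.5, consistent with the daily observation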

  12. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    International Nuclear Information System (INIS)

    Konya, Andras

    2006-01-01

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan TM (TX) and the Texan LONGhorn TM (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.

  13. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings

    Directory of Open Access Journals (Sweden)

    Livio Bioglio

    2016-11-01

    Full Text Available Abstract Background The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. Methods We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Results Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. Conclusions An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and

  14. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    Science.gov (United States)

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of
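
    A minimal sketch of the recalibration idea, assuming a deterministic SIR model as a stand-in for both the reference (contact-network) epidemic curve and the homogeneous-mixing approximation, is shown below; in the study the reference curve would come from stochastic simulations on empirical contact networks.

      # Minimal sketch: recalibrate a homogeneous-mixing transmission rate so the
      # SIR prevalence curve matches a reference epidemic curve.
      import numpy as np

      def sir_prevalence(beta, gamma=0.1, n=1000, i0=10, days=200):
          s, i = n - i0, i0
          prev = []
          for _ in range(days):
              new_inf = beta * s * i / n
              new_rec = gamma * i
              s, i = s - new_inf, i + new_inf - new_rec
              prev.append(i)
          return np.array(prev)

      reference = sir_prevalence(beta=0.30)                  # stand-in for the network-model output
      betas = np.linspace(0.1, 0.6, 101)
      errors = [np.sum((sir_prevalence(b) - reference) ** 2) for b in betas]
      best = betas[int(np.argmin(errors))]
      print(best)                                            # recalibrated homogeneous-mixing beta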

  15. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)]

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  16. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
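
    One of the approaches surveyed above, fitting a continuous distribution to data by maximum likelihood and checking goodness of fit, can be sketched as follows; the lognormal choice and the synthetic sample are illustrative assumptions.

      # Minimal sketch: maximum-likelihood distribution fit plus a goodness-of-fit check.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      data = rng.lognormal(mean=1.0, sigma=0.5, size=200)     # synthetic input-parameter data

      shape, loc, scale = stats.lognorm.fit(data, floc=0.0)   # maximum likelihood fit
      ks = stats.kstest(data, "lognorm", args=(shape, loc, scale))
      print(f"sigma={shape:.3f}, median={scale:.3f}, KS p-value={ks.pvalue:.3f}")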

  17. Effect of Process Parameter in Laser Cutting of PMMA Sheet and ANFIS Modelling for Online Control

    Directory of Open Access Journals (Sweden)

    Hossain Anamul

    2016-01-01

    Full Text Available Laser beam machining (LBM) is a promising and high-accuracy machining technology in advanced manufacturing. In LBM, crucial machining qualities of the end product include the heat affected zone, surface roughness, kerf width, thermal stress, and taper angle. For industrial applications, especially laser cutting of thermoplastics, it is essential to obtain products with minimum kerf width. The kerf width depends on laser input parameters such as laser power, cutting speed, standoff distance, and assist gas pressure; however, it is difficult to obtain a functional relationship because of the high uncertainty among these parameters. Hence, a total of 81 full factorial experiments were conducted, representing four input parameters at three levels each. The experiments were performed with a continuous wave (CW) CO2 laser (a Zech laser machine with a TEM01 mode structure) that can provide laser power up to 500 W. Polymethylmethacrylate (PMMA) sheet with a thickness of 3.0 mm was used for the experiments. Laser power, cutting speed, standoff distance and assist gas pressure were used as input parameters for the output, kerf width. Standoff distance, laser power, cutting speed and assist gas pressure had the dominant effects on kerf width, in that order, although the assist gas also plays a significant role in removing harmful gases. An ANFIS model has been developed for online control purposes. This research is intended to help manufacturing engineers adjust and make decisions about process parameters in the laser manufacturing of PMMA thermoplastics with the desired minimum kerf width, as well as for intricate shape design purposes.

  18. [Adjustment of Andersen's model to the Mexican context: access to prenatal care].

    Science.gov (United States)

    Tamez-González, Silvia; Valle-Arcos, Rosa Irene; Eibenschutz-Hartman, Catalina; Méndez-Ramírez, Ignacio

    2006-01-01

    The aim of this work was to propose an adjustment to Andersen's model that better reflects the social inequality of the population of Mexico City and allows evaluation of the effect of socioeconomic factors on access to prenatal care in a sample stratified according to degree of marginalization. The data come from a study of 663 women, randomly selected from a framework sample of 21,421 homes in Mexico City. This work collects information about factors that affect utilization of health services: predisposing factors (age and socioeconomic level), enabling factors (education, social support, entitlement, out-of-pocket payment and opinion of health services), and need factors. The sample was ranked according to exclusion variables into three strata. The data were analyzed through the technique of path analysis. The results indicate that socioeconomic level acts as a predisposing variable for utilization of prenatal care services in all three strata. Education and social support were the most important enabling variables for utilization of prenatal care services in the same three groups. In the low stratum, the most important enabling variables were education and entitlement. For the high stratum, the principal enabling variables were out-of-pocket payment and social support. The medium stratum showed atypical behavior that was difficult to explain and understand. The need variable played no mediating role in the three models, indicating an absence of equity in all strata. However, the larger number of correlations in the high stratum may indicate less inequitable conditions relative to the other strata.

  19. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical-thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  20. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    Science.gov (United States)

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.

  1. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)]

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  2. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    National Research Council Canada - National Science Library

    Lee, John

    2000-01-01

    ...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...

  3. Using Multilevel Modeling to Assess Case-Mix Adjusters in Consumer Experience Surveys in Health Care

    NARCIS (Netherlands)

    Damman, Olga C.; Stubbe, Janine H.; Hendriks, Michelle; Arah, Onyebuchi A.; Spreeuwenberg, Peter; Delnoij, Diana M. J.; Groenewegen, Peter P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for
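
    A minimal sketch of the multilevel approach referred to above, with synthetic data, hypothetical variable names, and a random intercept per provider, might look like this; the formula and effect sizes are illustrative assumptions, not the CAHPS-style case-mix model of the paper.

      # Minimal sketch: mixed-effects (multilevel) model for case-mix adjustment.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)
      n_providers, n_per = 30, 50
      provider = np.repeat(np.arange(n_providers), n_per)
      age = rng.normal(50, 15, provider.size)
      health = rng.normal(0, 1, provider.size)                # self-reported health score
      provider_effect = rng.normal(0, 0.3, n_providers)[provider]
      rating = 7.5 + 0.01 * age + 0.4 * health + provider_effect + rng.normal(0, 1, provider.size)

      df = pd.DataFrame({"rating": rating, "age": age, "health": health, "provider": provider})
      model = smf.mixedlm("rating ~ age + health", df, groups=df["provider"])
      result = model.fit()
      print(result.summary())     # fixed effects = case-mix adjusters, random intercepts = providers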

  4. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Energy Technology Data Exchange (ETDEWEB)

    Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.

    2016-11-01

    This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of fungicides. The results showed that higher fungicidal residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops. (Author)

  5. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Directory of Open Access Journals (Sweden)

    Henryk Ratajkiewicz

    2016-08-01

    Full Text Available This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of fungicides. The results showed that higher fungicidal residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops.

  6. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    Science.gov (United States)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator was established. Additionally, the Popov hyperstability principle was used in the design of the adaptive law to guarantee accurate convergence. In order to reduce oscillation of the identification results, this work introduces a first-order low-pass digital filter to improve the precision of the parameter estimation. The proposed scheme was then applied to an SPM synchronous motor control system without any additional circuits and implemented using a DSP TMS320LF2812. The experimental results demonstrate the effectiveness of the proposed method.
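
    The model-reference-adaptive idea can be illustrated on a generic first-order plant with the simple MIT-rule adaptive law; this is only a sketch of the principle, since the dq-axis PMSM estimator of the paper uses a Popov-hyperstability-based law and different dynamics, and the plant, gain, and input below are illustrative assumptions.

      # Minimal sketch: MIT-rule adaptation of one parameter of an adjustable model.
      import numpy as np

      dt, a_true, gamma = 0.001, 5.0, 2.0
      a_hat, y, y_hat = 1.0, 0.0, 0.0

      for k in range(20000):
          u = np.sign(np.sin(2 * np.pi * 0.5 * k * dt))    # persistently exciting square wave
          y += dt * (-a_true * y + u)                      # reference model (the real plant)
          y_hat += dt * (-a_hat * y_hat + u)               # adjustable model
          e = y_hat - y
          a_hat += dt * gamma * e * y_hat                  # MIT-rule adaptation of the parameter
      print(a_hat)                                         # drifts toward a_true = 5.0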

  7. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  8. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  9. Seven-parameter statistical model for BRDF in the UV band.

    Science.gov (United States)

    Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua

    2012-05-21

    A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band.

  10. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and to substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
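
    In the same input-output spirit, the sketch below runs a recursive least-squares identification of a second-order ARX model; the model structure, true coefficients, and noise level are illustrative assumptions, not the canonical state-space formulation of the paper.

      # Minimal sketch: recursive least-squares identification from input-output data.
      import numpy as np

      rng = np.random.default_rng(8)
      a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5         # true system: y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
      theta = np.zeros(4)
      P = np.eye(4) * 1000.0
      y_hist, u_hist = [0.0, 0.0], [0.0, 0.0]

      for k in range(2000):
          u = rng.normal()
          y = -a1 * y_hist[-1] - a2 * y_hist[-2] + b1 * u_hist[-1] + b2 * u_hist[-2] + 0.01 * rng.normal()
          phi = np.array([-y_hist[-1], -y_hist[-2], u_hist[-1], u_hist[-2]])
          K = P @ phi / (1.0 + phi @ P @ phi)      # RLS gain
          theta = theta + K * (y - phi @ theta)
          P = P - np.outer(K, phi) @ P
          y_hist.append(y); u_hist.append(u)

      print(theta)                                 # converges toward [a1, a2, b1, b2]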

  11. Time-varying parameter models for catchments with land use change: the importance of model structure

    Science.gov (United States)

    Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid

    2018-05-01

    Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  12. Time-varying parameter models for catchments with land use change: the importance of model structure

    Directory of Open Access Journals (Sweden)

    S. Pathiraja

    2018-05-01

    Full Text Available Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  13. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...

  14. On the relationship between input parameters in two-mass vocal-fold model with acoustical coupling an signal parameters of the glottal flow

    NARCIS (Netherlands)

    van Hirtum, Annemie; Lopez, Ines; Hirschberg, Abraham; Pelorson, Xavier

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  15. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    Hirtum, van A.; Lopez Arteaga, I.; Hirschberg, A.; Pelorson, X.

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  16. A Note on the Item Information Function of the Four-Parameter Logistic Model

    Science.gov (United States)

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
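
    For reference, the 4PL response function and its information function, I(theta) = P'(theta)^2 / (P (1 - P)), can be evaluated numerically as in the sketch below; the item parameter values are illustrative.

      # Minimal sketch: 4PL item response and item information functions.
      import numpy as np

      def p_4pl(theta, a, b, c, d):
          return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

      def info_4pl(theta, a, b, c, d):
          p = p_4pl(theta, a, b, c, d)
          dp = a * (p - c) * (d - p) / (d - c)          # derivative of the 4PL curve
          return dp ** 2 / (p * (1.0 - p))

      theta = np.linspace(-4, 4, 161)
      info = info_4pl(theta, a=1.2, b=0.0, c=0.2, d=0.95)
      print(theta[int(np.argmax(info))])                # ability level that maximizes information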

  17. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    Science.gov (United States)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand (0°N to 25°N and 95°E to 110°E). These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5°x5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the estimated VTEC from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the

  18. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    Science.gov (United States)

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the values predicted by the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
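
    One common single-factor form of such model adjustment regresses locally observed values on the regional-model predictions in log space and uses the fitted relation to correct future regional estimates; the sketch below illustrates that form with synthetic numbers and is not a reproduction of the specific USGS procedures applied in the report.

      # Minimal sketch: local adjustment of regional regression predictions.
      import numpy as np

      rng = np.random.default_rng(9)
      regional_pred = rng.lognormal(2.0, 0.8, 45)                          # regional-model loads for 45 storms
      observed = 0.6 * regional_pred ** 1.1 * rng.lognormal(0, 0.3, 45)    # local observations

      # fit log(observed) = b0 + b1 * log(predicted)
      b1, b0 = np.polyfit(np.log(regional_pred), np.log(observed), 1)

      def adjusted(pred):
          return np.exp(b0) * pred ** b1

      print(adjusted(np.array([5.0, 20.0, 80.0])))                         # locally adjusted load estimates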

  19. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
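
    The classification step described above can be sketched with a support vector machine trained on (parameter vector, failed/succeeded) pairs; the 18-dimensional sample and the toy failure region below are illustrative assumptions, not the POP2 parameter space.

      # Minimal sketch: SVM classification of parameter-induced simulation failures.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(10)
      X = rng.uniform(0.0, 1.0, size=(1000, 18))                 # 18 normalized model parameters
      failed = (((X[:, 0] > 0.9) & (X[:, 3] < 0.1)) | (X[:, 7] > 0.95)).astype(int)   # toy failure region
      X_tr, X_te, y_tr, y_te = train_test_split(X, failed, test_size=0.3, random_state=0)

      clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True).fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))
      print("failure probabilities:", clf.predict_proba(X_te[:3])[:, 1])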

  20. Development and Validation of Perioperative Risk-Adjustment Models for Hip Fracture Repair, Total Hip Arthroplasty, and Total Knee Arthroplasty.

    Science.gov (United States)

    Schilling, Peter L; Bozic, Kevin J

    2016-01-06

    Comparing outcomes across providers requires risk-adjustment models that account for differences in case mix. The burden of data collection from the clinical record can make risk-adjusted outcomes difficult to measure. The purpose of this study was to develop risk-adjustment models for hip fracture repair (HFR), total hip arthroplasty (THA), and total knee arthroplasty (TKA) that weigh adequacy of risk adjustment against data-collection burden. We used data from the American College of Surgeons National Surgical Quality Improvement Program to create derivation cohorts for HFR (n = 7000), THA (n = 17,336), and TKA (n = 28,661). We developed logistic regression models for each procedure using age, sex, American Society of Anesthesiologists (ASA) physical status classification, comorbidities, laboratory values, and vital signs-based comorbidities as covariates, and validated the models with use of data from 2012. The derivation models' C-statistics for mortality were 80%, 81%, 75%, and 92% and for adverse events were 68%, 68%, 60%, and 70% for HFR, THA, TKA, and combined procedure cohorts. Age, sex, and ASA classification accounted for a large share of the explained variation in mortality (50%, 58%, 70%, and 67%) and adverse events (43%, 45%, 46%, and 68%). For THA and TKA, these three variables were nearly as predictive as models utilizing all covariates. HFR model discrimination improved with the addition of comorbidities and laboratory values; among the important covariates were functional status, low albumin, high creatinine, disseminated cancer, dyspnea, and body mass index. Model performance was similar in validation cohorts. Risk-adjustment models using data from health records demonstrated good discrimination and calibration for HFR, THA, and TKA. It is possible to provide adequate risk adjustment using only the most predictive variables commonly available within the clinical record. This finding helps to inform the trade-off between model performance and data
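
    A minimal sketch of building a parsimonious risk-adjustment model and checking its discrimination with the C-statistic (area under the ROC curve) is shown below; the synthetic cohort, variable names, and effect sizes are illustrative assumptions, not the NSQIP data of the study.

      # Minimal sketch: logistic risk-adjustment model with a C-statistic check.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)
      n = 20000
      age = rng.normal(70, 12, n)
      male = rng.integers(0, 2, n)
      asa = rng.integers(1, 5, n)                                  # ASA class 1-4
      logit = -9.0 + 0.05 * age + 0.2 * male + 0.8 * asa
      death = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

      X = np.column_stack([age, male, asa])
      X_tr, X_te, y_tr, y_te = train_test_split(X, death, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      print(f"C-statistic: {c_stat:.2f}")                          # discrimination of the risk model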

  1. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    Science.gov (United States)

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  2. MODEL JARINGAN SYARAF TIRUAN UNTUK MEMPREDIKSI PARAMETER KUALITAS TOMAT BERDASARKAN PARAMETER WARNA RGB (An artificial neural network model for predicting tomato quality parameters based on RGB color parameters)

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2013-03-01

    Full Text Available Artificial neural networks (ANN) were used to predict the quality parameters of tomato, i.e. Brix, citric acid, total carotene, and vitamin C. The ANN was developed from Red Green Blue (RGB) image data of tomatoes measured using a purpose-built computer vision system (CVS); tomato quality data were obtained from laboratory analyses. The ANN model was based on a feedforward backpropagation network with different training functions, namely gradient descent (traingd), gradient descent with resilient backpropagation (trainrp), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton (trainbfg), and Levenberg-Marquardt (trainlm). The network structure using logsig and linear (purelin) activation functions in the hidden and output layers, respectively, and trained with trainlm (1000 epochs) gave the best performance. Correlation coefficients (r) for the training and validation stages were 0.97-0.99 and 0.92-0.99, and the corresponding MAE values ranged from 0.01 to 0.23 and from 0.03 to 0.59, respectively. Keywords: Artificial neural network, trainlm, tomato, RGB
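
    A minimal sketch of the same modeling idea using scikit-learn's MLPRegressor as a stand-in for the MATLAB training functions named above (traingd, trainrp, trainbfg, trainlm); the RGB data and the Brix relationship are synthetic, not the authors' measurements.

```python
# Illustrative sketch (synthetic data): a small feedforward network mapping
# mean RGB values to a quality parameter such as Brix. MLPRegressor stands in
# for the MATLAB networks named above; it is not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
rgb = rng.uniform(0, 255, size=(300, 3))                  # mean R, G, B per tomato
brix = 0.02 * rgb[:, 0] - 0.01 * rgb[:, 1] + 3.0 + rng.normal(0, 0.2, 300)

X_train, X_test, y_train, y_test = train_test_split(
    rgb / 255.0, brix, test_size=0.2, random_state=2)

# One hidden layer with a logistic (logsig-like) activation and a linear
# output; LBFGS is a quasi-Newton optimizer comparable in spirit to trainbfg.
ann = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=2)
ann.fit(X_train, y_train)

pred = ann.predict(X_test)
r = np.corrcoef(y_test, pred)[0, 1]
mae = np.mean(np.abs(y_test - pred))
print(f"r = {r:.3f}, MAE = {mae:.3f}")
```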

  3. Connecting Global to Local Parameters in Barred Galaxy Models

    Indian Academy of Sciences (India)


    Key words: galaxies: barred; orbits; global and local parameters.

  4. Checking the new IRI model: The bottomside B parameters

    International Nuclear Information System (INIS)

    Mosert, M.; Buresova, D.; Miro, G.; Lazo, B.; Ezquer, R.

    2003-01-01

    Electron density profiles obtained at Pruhonice (50.0, 15.0), El Arenosillo (37.1, 353.2) and Havana (23, 278) were used to check the bottomside B parameters B0 (thickness parameter) and B1 (shape parameter) predicted by the new IRI-2000 version. The electron density profiles were derived from ionograms using the ARP technique. The data base includes daytime and nighttime ionograms recorded under different seasonal and solar activity conditions. Comparisons with IRI predictions were also done. The analysis shows that: a) the parameter B1 given by IRI-2000 reproduces the observed ARP values better than the IRI-90 version does, and b) the observed B0 values are in general well reproduced by both IRI versions, IRI-90 and IRI-2000. (author)

  5. Checking the new IRI model The bottomside B parameters

    CERN Document Server

    Mosert, M; Ezquer, R; Lazo, B; Miro, G

    2002-01-01

    Electron density profiles obtained at Pruhonice (50.0, 15.0), El Arenosillo (37.1, 353.2) and Havana (23, 278) were used to check the bottomside B parameters B0 (thickness parameter) and B1 (shape parameter) predicted by the new IRI-2000 version. The electron density profiles were derived from ionograms using the ARP technique. The data base includes daytime and nighttime ionograms recorded under different seasonal and solar activity conditions. Comparisons with IRI predictions were also done. The analysis shows that: a) the parameter B1 given by IRI-2000 reproduces the observed ARP values better than the IRI-90 version does, and b) the observed B0 values are in general well reproduced by both IRI versions, IRI-90 and IRI-2000.

  6. Amundsen Sea simulation with optimized ocean, sea ice, and thermodynamic ice shelf model parameters

    Science.gov (United States)

    Nakayama, Y.; Menemenlis, D.; Schodlok, M.; Heimbach, P.; Nguyen, A. T.; Rignot, E. J.

    2016-12-01

    Ice shelves and glaciers of the West Antarctic Ice Sheet are thinning and melting rapidly in the Amundsen Sea (AS). This is thought to be caused by warm Circumpolar Deep Water (CDW) that intrudes via submarine glacial troughs located at the continental shelf break. Recent studies, however, point out that the depth of the thermocline, or the thickness of Winter Water (WW, water with potential temperature below -1 °C located above CDW), is critical in determining the melt rate, especially for the Pine Island Glacier (PIG). For example, the basal melt rate of PIG, which decreased by 50% during summer 2012, has been attributed to thickening of WW. Despite the possible importance of WW thickness for ice shelf melting, previous modeling studies in this region have focused primarily on CDW intrusion and have evaluated numerical simulations based on bottom or deep CDW properties. As a result, none of these models have shown a good representation of WW for the AS. In this study, we adjust a small number of model parameters in a regional Amundsen and Bellingshausen Seas configuration of the Massachusetts Institute of Technology general circulation model (MITgcm) to better fit the available observations during the 2007-2010 period. We choose this time period because summer observations during these years show small interannual variability in the eastern AS. As a result of the adjustments, our model matches observations significantly better than previous modeling studies, especially for WW. Since the density of sea water depends largely on salinity at low temperature, this is crucial for assessing the impact of WW on the PIG melt rate. In addition, we conduct several sensitivity studies showing the impact of surface heat loss on the thickness and properties of WW. We also discuss some preliminary results pertaining to further optimization using the adjoint method. Our work is a first step toward improved representation of ice shelf-ocean interactions in the ECCO (Estimating the Circulation and Climate of the Ocean) framework.

  7. Modelling of nonhomogeneous atmosphere in NPP containment using lumped-parameter model based on CFD calculations

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Babic, Miroslav; Mavko, Borut

    2007-01-01

    The possibility of adequately simulating the flow circulation within a nuclear power plant containment using a lumped-parameter code is considered. An experiment on atmosphere mixing and stratification, performed in the containment experimental facility TOSQAN at IRSN (Institute of Radioprotection and Nuclear Safety) in Saclay (France), was simulated with the CFD (Computational Fluid Dynamics) code CFX4 and the lumped-parameter code CONTAIN. During some phases of the experiment, steady states were achieved by keeping the boundary conditions constant. Two steady states during which natural convection was the dominant gas flow mechanism were simulated independently. The nodalization of the lumped-parameter model was based on the flow pattern simulated with the CFD code. The simulation with the lumped-parameter code predicted essentially the same flow circulation patterns within the experimental vessel as the simulation with the CFD code. (authors)

  8. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

  9. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    Science.gov (United States)

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  10. Using multilevel modelling to assess case-mix adjusters in consumers experience surveys in health care

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for
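
    A minimal sketch of case-mix adjustment with a multilevel (random-intercept) model, in the spirit of the approach described here; the statsmodels formulation, variable names, and data are hypothetical, not those of the actual survey.

```python
# Illustrative sketch (synthetic data): case-mix adjustment of consumer
# ratings with a random-intercept multilevel model; variable names are
# hypothetical, not those of the actual survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_providers, n_per = 50, 100
provider = np.repeat(np.arange(n_providers), n_per)
age = rng.normal(55, 15, provider.size)
education = rng.integers(1, 4, provider.size)             # 1 = low ... 3 = high

provider_effect = rng.normal(0, 0.3, n_providers)[provider]
rating = (8.0 + 0.02 * (age - 55) - 0.3 * (education - 2)
          + provider_effect + rng.normal(0, 1.0, provider.size))

df = pd.DataFrame(dict(rating=rating, age=age, education=education,
                       provider=provider))

# Fixed effects for the consumer characteristics (case-mix adjusters),
# random intercept for each provider.
fit = smf.mixedlm("rating ~ age + education", df, groups=df["provider"]).fit()
print(fit.summary())

# Provider comparisons can then be based on the estimated random intercepts
# rather than on raw mean ratings.
adjusted = {g: re["Group"] for g, re in fit.random_effects.items()}
```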

  11. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  12. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  13. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  14. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
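
    A minimal sketch of parameter screening in the spirit of this record, using a Morris design from SALib; the parameter names follow the abstract, but the response function is a toy analytic stand-in rather than a Delft3D surge simulation.

```python
# Illustrative sketch: Morris screening with SALib. Parameter names follow
# the abstract, but the response function is a toy analytic stand-in rather
# than a Delft3D surge simulation.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 4,
    "names": ["wind_drag", "breaking_gammaB", "bottom_roughness", "eddy_viscosity"],
    "bounds": [[1.0e-3, 3.0e-3], [0.5, 0.8], [0.01, 0.05], [0.1, 10.0]],
}

def toy_surge(x):
    wind_drag, gamma_b, roughness, _eddy = x     # eddy viscosity intentionally inert
    return (wind_drag / 2.0e-3) * (1.0 + 0.5 * gamma_b) - 5.0 * roughness

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.array([toy_surge(x) for x in X])

# mu_star ranks parameter importance; sigma flags interactions/nonlinearity.
res = morris_analyze.analyze(problem, X, Y, num_levels=4, print_to_console=True)
```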

  15. New trends in parameter identification for mathematical models

    CERN Document Server

    Leitão, Antonio; Zubelli, Jorge

    2018-01-01

    The Proceedings volume contains 16 contributions to the IMPA conference “New Trends in Parameter Identification for Mathematical Models”, Rio de Janeiro, Oct 30 – Nov 3, 2017, integrating the “Chemnitz Symposium on Inverse Problems on Tour”. This conference is part of the “Thematic Program on Parameter Identification in Mathematical Models” organized at IMPA in October and November 2017. One goal is to foster the scientific collaboration between mathematicians and engineers from the Brazilian, European and Asian communities. Main topics are iterative and variational regularization methods in Hilbert and Banach spaces for the stable approximate solution of ill-posed inverse problems, novel methods for parameter identification in partial differential equations, problems of tomography, the solution of coupled conduction-radiation problems at high temperatures, and the statistical solution of inverse problems with applications in physics.

  16. Ajuste de parâmetros de transporte de solutos no solo utilizando Matlab 6.5 (Adjustment of soil solute transport parameters with Matlab 6.5)

    Directory of Open Access Journals (Sweden)

    Anderson L. de Souza

    2011-12-01

    Full Text Available The successful use of mathematical models to study ion transport in soil is closely tied to the precision with which the transport parameters involved in this process are established. In general, these parameters are determined by solving a nonlinear optimization problem in which experimental data, obtained in miscible displacement experiments, are fitted to a theoretical model. The use of high-performance software to fit these parameters is therefore advantageous: besides the consistency and availability of pre-existing numerical tools, it allows new routines to be incorporated according to the phenomenon to be simulated. The objective of this work was thus to develop, in the MATLAB 6.5 environment, a computational routine for optimizing the following transport parameters: the retardation factor (R) and the dispersion coefficient (D). The routine was applied to experimental data from three miscible displacement experiments with the potassium ion in columns packed with a sandy Red-Yellow Latosol. The quality of the fits was evaluated using the coefficient of accuracy. The proposed routine performed very well, which, besides reinforcing the consistency of the numerical method used, indicates that it can contribute to advancing theoretical studies of the dynamics of water and solutes in unsaturated porous media.
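
    A minimal Python sketch of the same fitting task, as a stand-in for the MATLAB 6.5 routine: the retardation factor R and the dispersion coefficient D are estimated by nonlinear least squares against a breakthrough curve, using a simplified step-input solution of the one-dimensional convection-dispersion equation. The velocity, column length, and data below are hypothetical.

```python
# Illustrative sketch (hypothetical data): estimate the retardation factor R
# and dispersion coefficient D from a breakthrough curve by nonlinear least
# squares, using a simplified step-input solution of the 1-D
# convection-dispersion equation (the usually negligible second term of the
# full analytical solution is omitted).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

v, x = 2.0, 30.0    # pore-water velocity (cm/h) and column length (cm), assumed known

def relative_conc(t, R, D):
    # Approximate C/C0 for a step input in a semi-infinite column.
    return 0.5 * erfc((R * x - v * t) / (2.0 * np.sqrt(D * R * t)))

# Hypothetical breakthrough data: time (h) vs relative concentration C/C0.
t_obs = np.linspace(5, 60, 20)
c_obs = relative_conc(t_obs, 2.5, 1.2) + np.random.default_rng(4).normal(0, 0.01, t_obs.size)

(R_fit, D_fit), _ = curve_fit(relative_conc, t_obs, c_obs, p0=[2.0, 0.5],
                              bounds=([1.0, 0.01], [10.0, 50.0]))
print(f"R = {R_fit:.2f}, D = {D_fit:.2f} cm^2/h")
```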

  17. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study evaluates the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models that share a common framework. Two parameters of the monthly model, k and m, are allowed to vary from month to month. Using the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is the opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The resulting model has lower NSE values than the model with time-variant k and m calibrated through SCE-UA, but for several study catchments it has higher NSE values than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.
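
    A minimal sketch of one simple way to express the downscaling idea (not the paper's exact regression scheme): regress calibrated monthly k values on monthly NDVI, then impose the fitted seasonal pattern on a mean-annual value; all numbers are synthetic.

```python
# Illustrative sketch (synthetic numbers): regress calibrated monthly k values
# on monthly NDVI, then impose the fitted seasonal pattern on a mean-annual
# parameter value. This expresses the downscaling idea only; it is not the
# paper's exact regression scheme.
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(12)
ndvi_monthly = np.clip(0.5 + 0.25 * np.sin(2 * np.pi * (months - 3) / 12)
                       + rng.normal(0, 0.02, 12), 0.0, 1.0)

# Hypothetical monthly k values calibrated over a training period.
k_calibrated = 0.8 + 0.4 * ndvi_monthly + rng.normal(0, 0.02, 12)

# Linear fit k = a * NDVI + b across months.
a, b = np.polyfit(ndvi_monthly, k_calibrated, deg=1)

# Downscale: apply the fitted seasonal pattern, rescaled so the monthly
# values average to the mean-annual parameter.
k_annual = 1.0
k_monthly = a * ndvi_monthly + b
k_monthly *= k_annual / k_monthly.mean()
print(np.round(k_monthly, 3))
```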

  18. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extraction. Also included in the report are recommendations for experimental setups that generate optimal datasets for model extraction, and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
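
    A minimal sketch of least-squares parameter extraction for a much simpler two-parameter diode model; this is a generic SciPy stand-in for illustration, not the Xyce-Dakota workflow or the 9-parameter Zener circuit model described in the report.

```python
# Illustrative sketch (synthetic data): least-squares extraction of a simple
# two-parameter forward-diode model from an I-V sweep. A generic SciPy
# stand-in, not the Xyce-Dakota workflow or the 9-parameter Zener model.
import numpy as np
from scipy.optimize import least_squares

VT = 0.02585                                   # thermal voltage at ~300 K (V)

def shockley(v, i_s, n):
    # Ideal-diode (Shockley) forward characteristic.
    return i_s * (np.exp(v / (n * VT)) - 1.0)

# Hypothetical measured forward sweep.
v_meas = np.linspace(0.3, 0.7, 40)
i_meas = shockley(v_meas, 2e-12, 1.8) * np.random.default_rng(6).lognormal(0, 0.03, 40)

def residuals(p):
    i_s, n = np.exp(p[0]), p[1]                # fit log(i_s) so it stays positive
    # Fit in log-current space so all decades of current are weighted evenly.
    return np.log(shockley(v_meas, i_s, n)) - np.log(i_meas)

fit = least_squares(residuals, x0=[np.log(1e-10), 1.0])
i_s_fit, n_fit = np.exp(fit.x[0]), fit.x[1]
print(f"I_s = {i_s_fit:.2e} A, n = {n_fit:.2f}")
```

    The optimization step here plays the role that Dakota plays around Xyce in the report: refining initial hand-extracted parameter values against measured data.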

  19. Importance of hydrological parameters in contaminant transport modeling in a terrestrial environment

    International Nuclear Information System (INIS)

    Tsuduki, Katsunori; Matsunaga, Takeshi

    2007-01-01

    A grid-type, multi-layered distributed-parameter model for calculating discharge in a watershed is described. Model verification against our field observations yielded different sets of hydrological parameter values, all of which reproduced the observed discharge. The effect of these varied hydrological parameters on contaminant transport calculations was examined and discussed by simulating event water transfer. (author)

  20. MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I

    Directory of Open Access Journals (Sweden)

    Sit B.

    2009-08-01

    Full Text Available This paper examines the equations of dynamics and statics of the adjustable intermediate loop of a carbon dioxide heat pump station. The heat pump station is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the circulation speed of the liquid in the loop and by changing the area of the heat-transfer surface, both in the evaporator and in the intermediate heat exchanger, depending on the governing parameter, for example the external air temperature and wind speed.