WorldWideScience

Sample records for model parameter adjustments

  1. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Directory of Open Access Journals (Sweden)

    Belehaki Anna

    2012-12-01

    Full Text Available Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is calculated incorrectly, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite the inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  2. Mistral project: identification and parameter adjustment. Theoretical part; Projet Mistral: identification et recalage des modeles. Etude theorique

    Energy Technology Data Exchange (ETDEWEB)

    Faille, D.; Codrons, B.; Gevers, M.

    1996-03-01

    This document belongs to the methodological part of the MISTRAL project, which builds a library of power plant models. The model equations are generally obtained from first principles. The parameters, however, are not always easy to calculate (at least accurately) from the dimensioning data. We are therefore investigating the possibility of automatically adjusting the values of those parameters from experimental data. To do this, we must master the optimization algorithms and the techniques for analyzing the model structure, such as identifiability theory. (authors). 7 refs., 1 fig., 1 append.

  3. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing system following an M/M/2 queuing model, where the pharmacists act as the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. Waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes, respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time is reduced by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 produce a further reduction in patient waiting time, they are ineffective since Counter 5 is rarely used.
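
    As a rough analytical cross-check of this kind of staffing comparison (not the authors' ProModel simulation), the Erlang C formula gives the mean queueing delay of an M/M/c system; the arrival and service rates below are hypothetical placeholders, not the hospital's measured values.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in queue (Wq) for an M/M/c system.

    lam: arrival rate, mu: service rate per server, c: number of servers.
    Uses the Erlang C formula; requires lam < c * mu for stability.
    """
    a = lam / mu                      # offered load (Erlangs)
    rho = a / c                       # server utilization
    if rho >= 1:
        raise ValueError("Unstable queue: need lam < c * mu")
    summation = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1 - rho))
    p_wait = tail / (summation + tail)     # Erlang C: probability of waiting
    return p_wait / (c * mu - lam)         # Wq = P(wait) / (c*mu - lam)

# Hypothetical rates (patients/min and prescriptions/min per pharmacist):
lam, mu = 0.9, 0.5
for c in (2, 3, 4, 5):
    print(c, round(erlang_c_wait(lam, mu, c), 2), "min")
```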

  4. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is thus extended with a new method for processing observational data affected by uncertainty.

  5. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Science.gov (United States)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
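
    The skill metric named above (correlation of anomaly time series) can be computed as in the following sketch, which assumes monthly climatologies are removed from co-located model and observation series; the function and variable names are illustrative, not part of the MERRA processing code.

```python
import numpy as np

def anomaly_correlation(model, obs, months):
    """Correlation of anomaly time series, as used for the skill metric above.

    model, obs: 1-D arrays of co-located time series (e.g. soil moisture);
    months: array of month indices (1-12) used to remove the mean seasonal cycle.
    """
    model, obs, months = map(np.asarray, (model, obs, months))
    m_anom = np.empty(model.shape, dtype=float)
    o_anom = np.empty(obs.shape, dtype=float)
    for m in range(1, 13):
        sel = months == m
        m_anom[sel] = model[sel] - model[sel].mean()   # remove model climatology
        o_anom[sel] = obs[sel] - obs[sel].mean()       # remove observed climatology
    return np.corrcoef(m_anom, o_anom)[0, 1]
```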

  7. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

    Practitioners are used to valuing a broad class of exotic interest rate derivatives simply by performing what is known as convexity adjustments (or convexity corrections). We start by exploiting the relations between various interest rate models and their connections to measure changes. As a result, we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact formulas. Concretely, for LIBOR in arrears (LIA) contracts, we derive the system of Riccati ODEs one needs to solve to obtain the exact adjustment. Based upon the ideas of Schrager and Pelsser (2006) we are also able to derive general swap adjustments useful, in particular, when dealing with constant...

  8. A Filled Function with Adjustable Parameters for Unconstrained Global Optimization

    Institute of Scientific and Technical Information of China (English)

    SHANGYou-lin; LIXiao-yan

    2004-01-01

    A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We discuss the properties of the proposed filled function. Conditions on this function and on the values of its parameters are given so that the constructed function has the desired properties of a traditional filled function.

  9. A New Automatic Method to Adjust Parameters for Object Recognition

    Directory of Open Access Journals (Sweden)

    Issam Qaffou

    2012-09-01

    Full Text Available To recognize an object in an image, the user must apply a combination of operators, where each operator has a set of parameters. These parameters must be "well" adjusted in order to reach good results. Usually, this adjustment is made manually by the user. In this paper we propose a new method to automate the process of parameter adjustment for an object recognition task. Our method is based on reinforcement learning; we use two types of agents: a User Agent that gives the necessary information and a Parameter Agent that adjusts the parameters of each operator. Due to the nature of reinforcement learning, the results depend not only on the system characteristics but also on the user's preferred choices.

  10. Adjustment or updating of models

    Indian Academy of Sciences (India)

    D J Ewins

    2000-06-01

    In this paper, first a review of the terminology used in model adjustment or updating is presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and of the current state of the art of this important application area within optimum design technology.

  11. Sensitivity of adjustment to parameter correlations and to response-parameter correlations

    Energy Technology Data Exchange (ETDEWEB)

    Wagschal, J.J. [Racah Inst. of Physics, Hebrew Univ. of Jerusalem, Edmond J. Safra Campus, Jerusalem, 91904 (Israel)

    2011-07-01

    The adjusted parameters and response, and their respective posterior uncertainties and correlations, are presented explicitly as functions of all relevant prior correlations for the two-parameter, one-response case. The dependence of these adjusted entities on the various prior correlations is analyzed and portrayed graphically for various valid correlation combinations on a simple criticality problem. (authors)

  12. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  13. Adjusting parameters of aortic valve stenosis severity by body size

    DEFF Research Database (Denmark)

    Minners, Jan; Gohlke-Baerwolf, Christa; Kaufmann, Beat A

    2014-01-01

    BACKGROUND: Adjustment of cardiac dimensions by measures of body size appears intuitively convincing, and in patients with aortic stenosis, aortic valve area (AVA) is commonly adjusted by body surface area (BSA). However, there is little evidence to support such an approach. OBJECTIVE: To identify the adequate measure of body size for the adjustment of aortic stenosis severity. METHODS: Parameters of aortic stenosis severity (jet velocity, mean pressure gradient (MPG) and AVA) and measures of body size (height, weight, BSA and body mass index (BMI)) were analysed in 2843 consecutive patients with aortic stenosis (jet velocity ≥2.5 m/s) and related to outcomes in a second cohort of 1525 patients from the Simvastatin/Ezetimibe in Aortic Stenosis (SEAS) study. RESULTS: Whereas jet velocity and MPG were independent of body size, AVA was significantly correlated with height, weight, BSA and BMI (Pearson...

  14. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2017-09-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
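
    The reweighting idea behind such an estimator can be illustrated with a stripped-down IRLS loop for a linear model with i.i.d. scaled t errors and fixed degrees of freedom; the AR error model and the ECME estimation of the degrees of freedom used in the paper are omitted from this sketch.

```python
import numpy as np

def irls_t_regression(X, y, dof=4.0, n_iter=50, tol=1e-8):
    """Iteratively reweighted least squares for a linear model with
    (scaled) Student-t errors of fixed degrees of freedom.

    Simplified sketch: the AR error model and the estimation of the
    degrees of freedom described in the paper are not implemented here.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r = y - X @ beta
        # Weights shrink for large residuals, giving robustness to outliers
        w = (dof + 1.0) / (dof + r**2 / sigma2)
        W = np.diag(w)
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        sigma2 = np.sum(w * (y - X @ beta_new)**2) / n
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, sigma2
```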

  15. Adjustment of Tsunami Source Parameters By Adjoint Methods

    Science.gov (United States)

    Pires, C.; Miranda, P.

    Tsunami waveforms recorded at tide gauges can be used to adjust tsunami source parameters and, indirectly, seismic focal parameters. Simple inversion methods, based on ray-tracing techniques, use only a small fraction of the available information. More elaborate techniques, based on Green's function methods, also have some limitations in their scope. A new methodology, using a variational approach, allows for a much more general inversion, which can directly optimize the focal parameters of tsunamigenic earthquakes. Idealized synthetic data and an application to the 1969 Gorringe Earthquake are used to validate the methodology.

  16. Adjustment of endogenous concentrations in pharmacokinetic modeling.

    Science.gov (United States)

    Bauer, Alexander; Wolfsegger, Martin J

    2014-12-01

    Estimating pharmacokinetic parameters in the presence of an endogenous concentration is not straightforward as cross-reactivity in the analytical methodology prevents differentiation between endogenous and dose-related exogenous concentrations. This article proposes a novel intuitive modeling approach which adequately adjusts for the endogenous concentration. Monte Carlo simulations were carried out based on a two-compartment population pharmacokinetic (PK) model fitted to real data following intravenous administration. A constant and a proportional error model were assumed. The performance of the novel model and the method of straightforward subtraction of the observed baseline concentration from post-dose concentrations were compared in terms of terminal half-life, area under the curve from 0 to infinity, and mean residence time. Mean bias in PK parameters was up to 4.5 times better with the novel model assuming a constant error model and up to 6.5 times better assuming a proportional error model. The simulation study indicates that this novel modeling approach results in less biased and more accurate PK estimates than straightforward subtraction of the observed baseline concentration and overcomes the limitations of previously published approaches.
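
    A generic illustration of the modelling idea (estimate the endogenous baseline as a parameter instead of subtracting the observed pre-dose value) is sketched below for a one-compartment IV-bolus model; the parameterization, error model and data are hypothetical and are not the authors' population PK model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative one-compartment IV-bolus model with an additive endogenous
# baseline; this parameterization is a hypothetical stand-in, not the
# published model.
def conc(t, c_endo, v, k, dose=100.0):
    return c_endo + (dose / v) * np.exp(-k * t)

# Synthetic example data generated from known values plus proportional noise
t = np.linspace(0.25, 24.0, 10)
rng = np.random.default_rng(0)
c_obs = conc(t, c_endo=3.0, v=5.0, k=0.25) * (1 + 0.05 * rng.standard_normal(t.size))

popt, _ = curve_fit(conc, t, c_obs, p0=[1.0, 1.0, 0.1])
c_endo_hat, v_hat, k_hat = popt
t_half = np.log(2) / k_hat      # terminal half-life of the exogenous component
```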

  17. Effect of Adjusting Pseudo-Guessing Parameter Estimates on Test Scaling When Item Parameter Drift Is Present

    Directory of Open Access Journals (Sweden)

    Kyung T. Han

    2015-07-01

    Full Text Available In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if c-parameter estimates are not adjusted in the test equating procedure when item-parameter drift (IPD) is present. This drift is commonly observed in equating studies and hence has been the source of considerable research. The results from a series of Monte Carlo simulation studies conducted under 32 different combinations of conditions showed that some calibration strategies in the study, where the c-parameters were adjusted to be identical across two test forms, resulted in more robust equating performance in the presence of IPD. This paper discusses the practical effectiveness and the theoretical importance of appropriately adjusting c-parameter estimates in equating.

  18. Lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. The lumped-parameter model development has been reported by (Wolf 1991b; Wolf 1991a; Wolf and Paronesso 1991; Wolf and Paronesso 19...

  19. Response model parameter linking

    NARCIS (Netherlands)

    Barrett, Michelle Derbenwick

    2015-01-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require

  20. Structural-Parameter-Based Jumping-Height-and-Distance Adjustment and Obstacle Sensing of a Bio-Inspired Jumping Robot

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2015-06-01

    Full Text Available Jumping-height-and-distance (JHD) active adjustment capability is important for jumping robots to overcome obstacles of different sizes. This paper proposes a new structural-parameter-based JHD active adjustment approach for our previous jumping robot. First, the JHD adjustments, achieved by modifying the lengths of different legs of the robot, are modelled and simulated. Second, three mechanisms for leg-length adjustment are proposed and compared, and the screw-and-nut mechanism is selected; of the structural parameters that can be adjusted with this mechanism, the one with the best JHD adjustment performance and the lowest mechanical complexity is adopted. Third, an obstacle-distance-and-height (ODH) detection method using only one infrared sensor is designed. Finally, the performance of the proposed methods is tested. Experimental results show that the adjustable ranges of jumping height and distance are 0.11 m and 0.96 m, respectively, which validates the effectiveness of the proposed JHD adjustment method.

  1. STUDY ON THE RELATIONSHIP BETWEEN ADJUSTABLE OPERATIONAL PARAMETERS AND NOISE OF SINGLE-CYLINDER DIESEL ENGINE

    Institute of Scientific and Technical Information of China (English)

    何勇; 鲍一丹

    2001-01-01

    A Model S195 (8.8 kW) single-cylinder diesel engine was used in this study to determine the effect of four operational parameters, i.e. intake valve close angle, exhaust valve open angle, fuel delivery angle and fuel injection pressure, on noise. Single-factor and multi-factor quadratic regression orthogonal methods were adopted in the experiments to find the relationship between the four parameters and noise. By means of optimization techniques, the optimum operational parameters for two working conditions of the engine were selected, and the test results showed that optimum adjustment could reduce noise by 2-4 dB.

  2. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    2011-01-01

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications, for both steady state and dynamic cases, are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and ... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono- and diammonium phosphate) production within an industrial granulator, which involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...

  3. Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment

    Science.gov (United States)

    Zheng, Shunyi; Wang, Zheng; Huang, Rongyong

    2015-04-01

    A zoom lens is more flexible for photogrammetric measurements under diverse environments than a fixed lens. However, challenges in calibration of zoom-lens cameras preclude the wide use of zoom lenses in the field of close-range photogrammetry. Thus, a novel zoom lens calibration method is proposed in this study. In this method, instead of conducting modeling after monofocal calibrations, we summarize the empirical zoom/focus models of intrinsic parameters first and then incorporate these parameters into traditional collinearity equations to construct the fundamental mathematical model, i.e., collinearity equations with zoom- and focus-related intrinsic parameters. Similar to monofocal calibration, images taken at several combinations of zoom and focus settings are processed in a single self-calibration bundle adjustment. In the self-calibration bundle adjustment, three types of unknowns, namely, exterior orientation parameters, unknown space point coordinates, and model coefficients of the intrinsic parameters, are solved simultaneously. Experiments on three different digital cameras with zoom lenses support the feasibility of the proposed method, and their relative accuracies range from 1:4000 to 1:15,100. Furthermore, the nominal focal length written in the exchangeable image file header is found to lack reliability in experiments. Thereafter, the joint influence of zoom lens instability and zoom recording errors is further analyzed quantitatively. The analysis result is consistent with the experimental result and explains the reason why zoom lens calibration can never have the same accuracy as monofocal self-calibration.
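
    The generic structure of such a bundle model is the standard pair of collinearity equations with the interior orientation written as functions of the zoom (z) and focus (s) settings; the specific empirical forms of f(z,s), x_0(z,s) and y_0(z,s) calibrated in the paper are not reproduced here.

```latex
% Collinearity equations with zoom (z)- and focus (s)-dependent intrinsics.
% (X,Y,Z): object point; (X_S,Y_S,Z_S): projection centre; r_{ij}: rotation
% matrix elements; \Delta x, \Delta y: lens distortion corrections.
\begin{aligned}
x - x_0(z,s) + \Delta x &= -f(z,s)\,
  \frac{r_{11}(X-X_S)+r_{21}(Y-Y_S)+r_{31}(Z-Z_S)}
       {r_{13}(X-X_S)+r_{23}(Y-Y_S)+r_{33}(Z-Z_S)},\\[4pt]
y - y_0(z,s) + \Delta y &= -f(z,s)\,
  \frac{r_{12}(X-X_S)+r_{22}(Y-Y_S)+r_{32}(Z-Z_S)}
       {r_{13}(X-X_S)+r_{23}(Y-Y_S)+r_{33}(Z-Z_S)}.
\end{aligned}
```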

  4. Two Different Bifurcation Scenarios in Neural Firing Rhythms Discovered in Biological Experiments by Adjusting Two Parameters

    Institute of Scientific and Technical Information of China (English)

    WU Xiao-Bo; MO Juan; YANG Ming-Hao; ZHENG Qiao-Hua; GU Hua-Guang; REN Wei

    2008-01-01

    Two different bifurcation scenarios, one novel and the other relatively simple, in the transition procedures of neural firing patterns are studied in biological experiments on a neural pacemaker by adjusting two parameters. The experimental observations are simulated with a relevant theoretical model neuron. The deterministic non-periodic firing pattern lying within the novel bifurcation scenario is suggested to be a new case of chaos, which has not been observed in previous neurodynamical experiments.

  5. Determination and adjustment of drying parameters of Tunisian ceramic bodies

    Science.gov (United States)

    Mahmoudi, Salah; Bennour, Ali; Srasra, Ezzeddine; Zargouni, Fouad

    2016-12-01

    This work deals with the mineralogical, physico-chemical and geotechnical analyses of representative Aptian clays from north-east Tunisia. X-ray diffraction reveals a predominance of illite (50-60 wt%) associated with kaolinite and interstratified illite/smectite. The accessory minerals detected in the raw materials are quartz, calcite and Na-feldspar. The average amounts of silica, alumina and alkalis are 52, 20 and 3.5 wt%, respectively. The contents of lime and iron vary between 4 and 8 wt%. The plasticity test shows medium values of the plasticity index (16-28 wt%). The linear drying shrinkage is weak (less than 0.99 wt%), which makes these clays suitable for fast drying. The firing shrinkage and expansion are limited. Lower firing and drying temperatures allow significant energy savings. Currently, these clays are used in industry for manufacturing earthenware tiles. For optimum exploitation of the clay materials and improvement of production conditions, a mathematical formulation is established for the drying parameters. These models predict drying shrinkage (d), bending strength after drying (b) and residual moisture (r) from initial moisture (m) and pressing pressure (p).
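
    As an illustration of how such drying-parameter formulations can be fitted, the sketch below regresses each response on initial moisture and pressing pressure using a linear-plus-interaction form; this functional form is an assumption for illustration, since the paper's actual formulation is not given in the abstract.

```python
import numpy as np

# Fit empirical drying-parameter models of the assumed form
#   y = a0 + a1*m + a2*p + a3*m*p
# for y in {drying shrinkage d, bending strength b, residual moisture r}.
def fit_drying_model(m, p, y):
    A = np.column_stack([np.ones_like(m), m, p, m * p])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict(coeffs, m, p):
    return coeffs[0] + coeffs[1] * m + coeffs[2] * p + coeffs[3] * m * p
```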

  6. Behavioral modeling of Digitally Adjustable Current Amplifier

    OpenAIRE

    Josef Polak; Lukas Langhammer; Jan Jerabek

    2015-01-01

    This article presents the digitally adjustable current amplifier (DACA) and its analog behavioral model (ABM), which is suitable for both ideal and advanced analyses of function blocks using DACA as the active element. There are four levels of this model, each being suitable for simulating a certain stage of electronic circuit design (e.g. filters, oscillators, generators). Each level is presented through a schematic wiring in the simulation program OrCAD, including a description of equat...

  7. Behavioral modeling of Digitally Adjustable Current Amplifier

    Directory of Open Access Journals (Sweden)

    Josef Polak

    2015-03-01

    Full Text Available This article presents the digitally adjustable current amplifier (DACA) and its analog behavioral model (ABM), which is suitable for both ideal and advanced analyses of function blocks using DACA as the active element. There are four levels of this model, each being suitable for simulating a certain stage of electronic circuit design (e.g. filters, oscillators, generators). Each level is presented through a schematic wiring in the simulation program OrCAD, including a description of the equations representing specific functions at the given level of the simulation model. The design of the individual levels is always verified using PSpice simulations. The ABM model has been developed based on practically measured values of a number of DACA amplifier samples. The simulation results for the proposed levels of the ABM model are shown and compared with the results of real measurements of the active element DACA.

  8. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    Science.gov (United States)

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671

  9. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    Science.gov (United States)

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.

  10. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    Directory of Open Access Journals (Sweden)

    Zhi-Hui Lai

    2015-08-01

    Full Text Available A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
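
    The theoretical quantity referred to in these abstracts is the standard Kramers escape rate for an overdamped bistable system; the specific Duffing potential and the GPASR judgmental function built on it are not reproduced here.

```latex
% Kramers escape rate for an overdamped particle in a bistable potential U(x)
% driven by noise of intensity D; x_m is a potential minimum, x_b the barrier top.
r_K \;=\; \frac{\sqrt{\,U''(x_m)\,\lvert U''(x_b)\rvert\,}}{2\pi}\,
      \exp\!\left(-\frac{\Delta U}{D}\right),
\qquad \Delta U = U(x_b) - U(x_m).
```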

  11. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  12. A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT

    Institute of Scientific and Technical Information of China (English)

    陶华学; 郭金运

    2000-01-01

    Nonlinear least squares adjustment is a central problem studied in many technical fields. This paper studies a derivative-free solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method has a small computational load and is simple. This opens up a theoretical approach to solving the nonlinear dynamic least squares adjustment.
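
    The flavour of a derivative-free treatment of a nonlinear least-squares adjustment can be conveyed with a generic sketch that minimizes the sum of squared residuals using the Nelder-Mead simplex method; this is an illustration only, not the algorithm proposed in the paper, and the observation model below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free solution of a small nonlinear least-squares adjustment:
# minimize the sum of squared residuals with Nelder-Mead (no gradients needed).
def residuals(x, t, obs):
    a, b = x
    return obs - a * np.exp(b * t)          # example nonlinear observation model

def sse(x, t, obs):
    r = residuals(x, t, obs)
    return r @ r

t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(1)
obs = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)

result = minimize(sse, x0=[1.0, -1.0], args=(t, obs), method="Nelder-Mead")
a_hat, b_hat = result.x
```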

  13. Quantum-inspired bacterial foraging algorithm for parameter adjustment in green cognitive radio

    Institute of Scientific and Technical Information of China (English)

    Hongyuan Gao; Chenwan Li

    2015-01-01

    Parameter adjustment that maximizes the energy efficiency of cognitive radio networks is studied in this paper, where it is formulated as a complex discrete optimization problem. A quantum-inspired bacterial foraging algorithm (QBFA) is then proposed. Quantum computing has characteristics that help QBFA avoid local convergence and speed up the optimization. A proof of convergence is also given for this algorithm. The superiority of QBFA is verified by simulations on three test functions. A novel parameter adjustment method based on QBFA is proposed for resource allocation in green cognitive radio. The proposed method can provide a globally optimal solution for parameter adjustment in green cognitive radio networks. Simulation results show that the proposed method can reduce energy consumption effectively while satisfying different quality of service (QoS) requirements.

  14. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters so as to obtain the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.

  15. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It was expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  16. Photovoltaic module parameters acquisition model

    Science.gov (United States)

    Cibira, Gabriel; Koščová, Marcela

    2014-09-01

    This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on its equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters can be acquired at different realistic irradiation and temperature conditions as well as series resistances. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.
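
    A common equivalent-circuit choice for this kind of I-V/P-V computation is the single-diode model; the sketch below assumes that model with illustrative parameter values, since the abstract does not specify the circuit actually used.

```python
import numpy as np
from scipy.optimize import brentq

def pv_current(v, i_ph=8.5, i_0=1e-9, r_s=0.3, r_sh=300.0, n=1.3,
               t_cell=298.15, n_cells=60):
    """Terminal current of a single-diode PV module at voltage v (implicit model):
        I = I_ph - I_0*(exp((V + I*R_s)/(n*N_s*V_t)) - 1) - (V + I*R_s)/R_sh
    Parameter values are illustrative placeholders, not measured module data.
    """
    k, q = 1.380649e-23, 1.602176634e-19
    v_t = k * t_cell / q                      # thermal voltage of one cell
    def f(i):                                  # residual of the implicit equation
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * n_cells * v_t)) - 1.0)
                - (v + i * r_s) / r_sh - i)
    return brentq(f, -1.0, i_ph + 1.0)        # bracket and solve for the root

v_grid = np.linspace(0.0, 45.0, 200)           # voltages below open circuit
i_grid = np.array([pv_current(v) for v in v_grid])
p_grid = v_grid * i_grid                       # P-V characteristic
```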

  17. Mode choice model parameters estimation

    OpenAIRE

    Strnad, Irena

    2010-01-01

    The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application in a traffic model. The mode choice model includes the trip factors that affect the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...

  18. The Use of Response Surface Methodology to Optimize Parameter Adjustments in CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Shao-Hsien Chen

    2014-01-01

    Full Text Available This paper mainly covers research intended to improve the circular accuracy of CNC machine tools through the adjustment and analysis of the main controller parameters. In this study, controller analysis software was used to detect the adjustment status of the servo parameters of the feed axis. According to the FANUC parameter manual, the parameter addresses, frequency and response measurements, and the one-fourth corner acceleration and deceleration measurements of the machine tools were adjusted. A design of experiments (DOE) approach was adopted in this study for taking circular measurements and for planning and selecting the important parameter data. The Minitab R15 software was used to analyse the experimental data, while the half-normal probability plot, Pareto chart, and analysis of variance (ANOVA) were adopted to determine the significant parameter factors and the interactions among them. Additionally, based on the response surface map and contour plot, the optimal values were obtained. In addition, comparison and verification were conducted through the Taguchi method and regression analysis to improve machining accuracy and efficiency. The unadjusted error was 7.8 μm; with the regression analysis method the error was 5.8 μm, and with the Taguchi analysis method the error was 6.4 μm.
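
    The response-surface step can be illustrated with a minimal second-order (quadratic) surface fit for two coded factors; the factor names and data are hypothetical stand-ins for the servo-parameter settings analysed in the paper.

```python
import numpy as np

# Fit a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to (coded) factor settings from a designed experiment.
def fit_response_surface(x1, x2, y):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def stationary_point(b):
    """Point where both partial derivatives vanish (candidate optimum)."""
    H = np.array([[2 * b[3], b[5]],
                  [b[5], 2 * b[4]]])          # Hessian of the quadratic surface
    g = -np.array([b[1], b[2]])
    return np.linalg.solve(H, g)
```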

  19. A New Method for Identifying the Model Error of Adjustment System

    Institute of Scientific and Technical Information of China (English)

    TAO Benzao; ZHANG Chaoyu

    2005-01-01

    Some theoretical problems affecting parameter estimation are discussed in this paper, and the influence of, and transformation between, errors of the stochastic and functional models is pointed out as well. For choosing the best adjustment model, a formula for estimating and identifying the model error is proposed that differs from the existing methods in the literature. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.

  20. Asymptotically adjusted self-consistent multiplicative parameter exchange-energy-functional method: Application to diatomic molecules

    Science.gov (United States)

    Karasiev, Valentin V.; Ludeña, Eduardo V.

    2002-03-01

    An asymptotically adjusted self-consistent α (AASCα) method is advanced for the purpose of constructing an accurate orbital-dependent local exchange potential with correct asymptotic behavior. This local potential is made up of the Slater potential plus an additional term containing a multiplicative parameter αx (a self-consistently determined orbital functional) times a local response potential that is approximated using standard exchange-energy functionals. Applications of the AASCα functionals to diatomic molecules yield significantly improved total, exchange, and atomization energies that compare quite well, but at a much lower computational cost, with those obtained by the exact orbital-dependent exchange energy treatment [S. Ivanov, S. Hirata, and R. J. Bartlett, Phys. Rev. Lett. 83, 5455 (1999); A. Görling, Phys. Rev. Lett. 83, 5459 (1999)] (in fact, the present results are very close to the Hartree-Fock ones). Moreover, because in the AASCα method the exchange potential tends toward the correct (-1/r) asymptotic behavior, the ionization potentials approximated by the negative of the highest-occupied-orbital energy have a closer agreement with experimental values than those resulting from current approximate density functionals. Finally, we show that in the context of the present method it is possible to introduce some generalizations to the Gritsenko-van Leeuwen-van Lenthe-Baerends model [O. Gritsenko, R. van Leeuwen, E. van Lenthe, and E. J. Baerends, Phys. Rev. A 51, 1944 (1995)].

  1. Reductions in particulate and NO(x) emissions by diesel engine parameter adjustments with HVO fuel.

    Science.gov (United States)

    Happonen, Matti; Heikkilä, Juha; Murtonen, Timo; Lehto, Kalle; Sarjovaara, Teemu; Larmi, Martti; Keskinen, Jorma; Virtanen, Annele

    2012-06-01

    Hydrotreated vegetable oil (HVO) diesel fuel is a promising biofuel candidate that can complement or substitute for traditional diesel fuel in engines. It has already been reported that changing the fuel from conventional EN590 diesel to HVO decreases exhaust emissions. However, as the fuels have certain chemical and physical differences, it is clear that the full advantage of HVO cannot be realized unless the engine is optimized for the new fuel. In this article, we studied how much exhaust emissions can be reduced by adjusting engine parameters for HVO. The results indicate that, at all the studied loads (50%, 75%, and 100%), particulate mass and NO(x) can both be reduced by over 25% through engine parameter adjustments. Further, the emission reduction was even higher when the target of adjusting engine parameters was to exclusively reduce either particulates or NO(x). In addition to particulate mass, different indicators of particulate emissions were also compared. These indicators included filter smoke number (FSN), total particle number, total particle surface area, and geometric mean diameter of the emitted particle size distribution. As a result of this comparison, a linear correlation between FSN and total particulate surface area was found in the low-FSN region.

  2. Adjustment of Green-Ampt-Mein-Larson model parameters under field conditions

    Directory of Open Access Journals (Sweden)

    João H. Zonta

    2010-10-01

    Full Text Available The Green-Ampt-Mein-Larson (GAML) model is one of the models most widely used in infiltration modelling; however, some of its parameters do not match the real situation of the process of water infiltration into the soil. Thus, the present work aimed to evaluate the performance of the GAML model by testing different combinations of methodologies for obtaining its input parameters. The trials were carried out on a Podzolic Tb Dystrophic Haplic Cambisol, using a rainfall simulator, under field conditions on soil with and without mulch cover. Simulations were performed based on the combination of two methodologies for determining the moisture content in the transmission zone (θt), two for the hydraulic conductivity in the transmission zone (Kt), and three for the matric potential at the wetting front (ψf). The GAML model with its original parameters did not perform well, overestimating the infiltration rate (Ti) and the cumulative infiltration (I). The combinations that used ψf values calculated with the equation of Risse et al. (1995) underestimated Ti and I throughout the entire period, under both surface conditions. The combination of K0, θs and the equation of Cecílio (2005) gave the best results.
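
    For reference, the GAML model evaluated here builds on the Green-Ampt infiltration-capacity equation, written below with the parameters named in the abstract; the Mein-Larson extension applies it once the rainfall intensity exceeds the infiltration capacity (onset of ponding).

```latex
% Green-Ampt infiltration capacity after ponding, with the parameters of the
% abstract: K_t (hydraulic conductivity of the transmission zone), \psi_f
% (suction head at the wetting front), \theta_t (moisture in the transmission
% zone), \theta_i (initial moisture), F (cumulative infiltration).
f \;=\; K_t\left[1 + \frac{\psi_f\,(\theta_t-\theta_i)}{F}\right],
\qquad \text{with ponding once the rainfall intensity } i_r > f .
```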

  3. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry.

    Science.gov (United States)

    Meyer, Andrew J; Patten, Carolynn; Fregly, Benjamin J

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient's lower extremity muscle excitations contribute to the patient's lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient's musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with

  4. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  5. Sub-0.1 nm-resolution quantitative scanning transmission electron microscopy without adjustable parameters

    Energy Technology Data Exchange (ETDEWEB)

    Dwyer, C. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Department of Materials Engineering, Monash University, Victoria 3800 (Australia); ARC Centre of Excellence for Design in Light Metals, Monash University, Victoria 3800 (Australia); Maunders, C. [Department of Materials Engineering, Monash University, Victoria 3800 (Australia); Zheng, C. L. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Weyland, M.; Etheridge, J. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Department of Materials Engineering, Monash University, Victoria 3800 (Australia); Tiemeijer, P. C. [FEI Electron Optics, P.O. Box 80066, 5600 KA Eindhoven (Netherlands)

    2012-05-07

    Atomic-resolution imaging in the scanning transmission electron microscope (STEM) constitutes a powerful tool for nanostructure characterization. Here, we demonstrate the quantitative interpretation of atomic-resolution high-angle annular dark-field (ADF) STEM images using an approach that does not rely on adjustable parameters. We measure independently the instrumental parameters that affect sub-0.1 nm-resolution ADF images, quantify their individual and collective contributions to the image intensity, and show that knowledge of these parameters enables a quantitative interpretation of the absolute intensity and contrast across all accessible spatial frequencies. The analysis also provides a method for the in-situ measurement of the STEM's effective source distribution.

  6. Effect of Correlations Between Model Parameters and Nuisance Parameters When Model Parameters are Fit to Data

    CERN Document Server

    Roe, Byron

    2013-01-01

    The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.

  7. Effect of Flux Adjustments on Temperature Variability in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-12-27

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  8. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    Science.gov (United States)

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  9. Hard and Soft Adjusting of a Parameter With Its Known Boundaries by the Value Based on the Experts’ Estimations Limited to the Parameter

    OpenAIRE

    Romanuke Vadim V.

    2016-01-01

    Adjustment of an unknown parameter of a multistage expert procedure is considered. The lower and upper boundaries of the parameter are assumed to be known. A key condition showing that the experts' estimations are satisfactory in the current procedure is an inequality in which the value based on the estimations is not greater than the parameter. Algorithms of hard and soft adjusting are developed. If the inequality is true and its two terms are too close for a long sequence of expert proc...

  10. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with AP.

  11. Hard and Soft Adjusting of a Parameter With Its Known Boundaries by the Value Based on the Experts’ Estimations Limited to the Parameter

    Directory of Open Access Journals (Sweden)

    Romanuke Vadim V.

    2016-07-01

    Full Text Available Adjustment of an unknown parameter of the multistage expert procedure is considered. The lower and upper boundaries of the parameter are counted to be known. A key condition showing that experts’ estimations are satisfactory in the current procedure is an inequality, in which the value based on the estimations is not greater than the parameter. The algorithms of hard and soft adjusting are developed. If the inequality is true and its both terms are too close for a long sequence of expert procedures, the adjusting can be early stopped. The algorithms are reversible, implying inversion to the reverse inequality and sliding up off the lower boundary.

  12. Towards a Collision-Free WLAN: Dynamic Parameter Adjustment in CSMA/E2CA

    Directory of Open Access Journals (Sweden)

    Bellalta Boris

    2011-01-01

    Full Text Available Carrier sense multiple access with enhanced collision avoidance (CSMA/ECA is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA/ECA and the well-known CSMA/CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The paper also introduces CSMA/E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA/E2CA converges quicker to collision-free operation and delivers higher performance than CSMA/ECA, specially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation.
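
    The expected number of slots to reach collision-free operation follows from standard absorbing Markov chain theory: if Q is the transition matrix restricted to the transient states, the fundamental matrix N = (I - Q)^(-1) gives the expected number of steps before absorption as the row sums of N. A generic sketch (the 3-state Q below is a toy example, not the CSMA/ECA chain of the paper):

        import numpy as np

        def expected_steps_to_absorption(Q):
            """Expected number of steps to absorption from each transient state.

            Q is the transient-to-transient transition matrix of an absorbing
            Markov chain; N = (I - Q)^(-1) is the fundamental matrix and
            t = N @ 1 gives the expected steps before absorption.
            """
            Q = np.asarray(Q, dtype=float)
            N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
            return N @ np.ones(Q.shape[0])

        # Toy transient-state transition probabilities (illustrative values only)
        Q = [[0.5, 0.3, 0.0],
             [0.2, 0.4, 0.2],
             [0.0, 0.3, 0.3]]
        print(expected_steps_to_absorption(Q))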

  13. Evaluation of the impact of adjusting the angle of the axis of a wind turbine rotor relative to the flow of air stream on operating parameters of a wind turbine model

    Directory of Open Access Journals (Sweden)

    Gumuła Stanisław

    2017-01-01

    Full Text Available The aim of this study was to determine the effect of adjusting the angle of the wind turbine rotor axis relative to the wind direction on the volume of energy produced by wind turbines. The role of an optimal setting of the rotor blades was specified as well. According to the measurements, changes in the tilt angle of the rotor axis in relation to the air stream flow direction cause changes in the use of wind energy. The publication explores the effects of the operating conditions of wind turbines on the possibility of using wind energy. A range of factors affect the operation of a wind turbine, and thus the volume of energy produced by the plant; among them are the design parameters of the wind power plant, climatic factors, and seismic challenges associated with the location. One such parameter proved to be the setting of the rotor axis in relation to the direction of the air stream flow. The studies have shown that accurate determination of the optimum angle of the rotor axis with respect to the air stream flow strongly influences the characteristics of the wind turbine.

  14. Bayes linear covariance matrix adjustment for multivariate dynamic linear models

    CERN Document Server

    Wilkinson, Darren J

    2008-01-01

    A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
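
    For reference, the generic Bayes linear adjustment of a quantity $B$ by data $D$, the standard formulae underlying such covariance revisions (the notation here is generic, not the paper's):

        $E_D(B) = E(B) + \mathrm{Cov}(B,D)\,\mathrm{Var}(D)^{-1}(D - E(D))$ and $\mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B,D)\,\mathrm{Var}(D)^{-1}\mathrm{Cov}(D,B)$,

    where $E_D(B)$ and $\mathrm{Var}_D(B)$ are the adjusted expectation and variance; the inner-product-space treatment in the paper extends this kind of adjustment to the covariance matrices themselves.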

  15. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  16. Parameter estimation and investigation of a bolted joint model

    Science.gov (United States)

    Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.

    2007-11-01

    Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore too impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.

  17. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper proposes a study of the R.M. Solow adjusted model of economic growth, the adjustment consisting of adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to macroeconomic modelling using the R.M. Solow model, the others being “Measurement of the economic growth and extensions of the R.M. Solow adjusted model” and “Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model”. The analytical part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. An optimization problem at the economy level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
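
    For orientation, the core law of motion of the standard Solow model from which such adjustments start (standard textbook form, not the Romanian-specific calibration of the paper): capital per worker $k$ evolves as

        $\dot{k} = s\,f(k) - (n + \delta)\,k$,

    with saving rate $s$, population growth $n$ and depreciation $\delta$; the steady state $k^{*}$ solves $s\,f(k^{*}) = (n + \delta)\,k^{*}$, which is the equilibrium examined with the state diagram mentioned above.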

  18. Study on Optimization of Electromagnetic Relay's Reaction Torque Characteristics Based on Adjusted Parameters

    Science.gov (United States)

    Zhai, Guofu; Wang, Qiya; Ren, Wanbin

    The cooperative characteristics of electromagnetic relay's attraction torque and reaction torque are the key property to ensure its reliability, and it is important to attain better cooperative characteristics by analyzing and optimizing relay's electromagnetic system and mechanical system. From the standpoint of changing reaction torque of mechanical system, in this paper, adjusted parameters (armature's maximum angular displacement αarm_max, initial return spring's force Finiti_return_spring, normally closed (NC) contacts' force FNC_contacts, contacts' gap δgap, and normally opened (NO) contacts' over travel δNO_contacts) were adopted as design variables, and objective function was provided for with the purpose of increasing breaking velocities of both NC contacts and NO contacts. Finally, genetic algorithm (GA) was used to attain optimization of the objective function. Accuracy of calculation for the relay's dynamic characteristics was verified by experiment.

  19. Evaluation of ammonia volatilization losses by adjusted parameters of a logistic function

    Directory of Open Access Journals (Sweden)

    Marcos Lima Campos do Vale

    2014-02-01

    Full Text Available The dynamics of N losses in fertilizer by ammonia volatilization is affected by several factors, making investigation of these dynamics more complex. Moreover, some features of the behavior of the variable can lead to deviation from normal distribution, making the main commonly adopted statistical strategies inadequate for data analysis. Thus, the purpose of this study was to evaluate the patterns of cumulative N losses from urea through ammonia volatilization in order to find a more adequate and detailed way of assessing the behavior of the variable. For that reason, changes in patterns of ammonia volatilization losses as a result of applying different combinations of two soil classes [Planossolo and Chernossolo (Typic Albaqualf and Vertic Argiaquolls] and different rates of urea (50, 100 and 150 kg ha-1 N, in the presence or absence of a urease inhibitor, were evaluated, adopting a 2 × 3 × 2 factorial design with four replications. Univariate and multivariate analysis of variance were performed using the adjusted parameter values of a logistic function as a response variable. The results obtained from multivariate analysis indicated a prominent effect of the soil class factor on the set of parameters, indicating greater relevance of soil adsorption potential on ammonia volatilization losses. Univariate analysis showed that the parameters related to total N losses and rate of volatilization were more affected by soil class and the rate of urea applied. The urease inhibitor affected only the rate and inflection point parameters, decreasing the rate of losses and delaying the beginning of the process, but had no effect on total ammonia losses. Patterns of ammonia volatilization losses provide details on behavior of the variable, details which can be used to develop and adopt more accurate techniques for more efficient use of urea.
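
    A minimal sketch of fitting a logistic function to cumulative volatilization data with SciPy; the three parameters play the roles discussed above (asymptotic total loss A, rate k and inflection point t0). The observations below are placeholders, not data from the experiment.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, A, k, t0):
            """Cumulative N loss: asymptote A, rate k, inflection point t0."""
            return A / (1.0 + np.exp(-k * (t - t0)))

        # Placeholder observations: days after urea application vs. cumulative loss (% of applied N)
        t_obs = np.array([1, 3, 5, 7, 10, 14, 21], dtype=float)
        y_obs = np.array([1.0, 4.0, 9.0, 14.0, 17.5, 19.0, 19.5])

        (A, k, t0), _ = curve_fit(logistic, t_obs, y_obs, p0=[20.0, 0.5, 6.0])
        print(f"A = {A:.2f}  k = {k:.3f}  t0 = {t0:.2f}")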

  20. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    Science.gov (United States)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been clinically applied. Recently, more and more cases are reported with positive therapeutic effect by using transcranial light emitting diode (LED) illumination. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in circle shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of subjects. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software portion provides the control of on and off of each LED array, the setup of illumination parameters, and the 3D distribution of LLLT light dose in the human subject according to the illumination setups. This LLLT light dose distribution was computed by a Monte Carlo model for voxelized media and the Visible Chinese Human head dataset and displayed in 3D view against the background of the head anatomical structure. The performance of the whole system was fully tested. One stroke patient was recruited in the preliminary LLLT experiment and the following neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggested the potential of this illumination-parameter adjustable and illumination-distribution visible LED helmet as a reliable, noninvasive, and effective tool in treating brain injuries.

  1. Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters

    Science.gov (United States)

    Yang, Ding-Xin; Gu, Feng-Shou; Feng, Guo-Jin; Yang, Yong-Min; Ball, Andrew

    2015-11-01

    The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system is demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to different bit rate logic input signals, showing that an arbitrary high bit rate LSR in a bistable system cannot be achieved. Then, a normalized transform of the LSR bistable system is introduced through a kind of variable substitution. Based on the transform, it is found that LSR for arbitrary high bit rate logic signals in a bistable system can be achieved by adjusting the parameters of the system, setting bias value and amplifying the amplitudes of logic input signals and noise properly. Finally, the desired OR and AND logic outputs to high bit rate logic inputs in a bistable system are obtained by numerical simulations. The study might provide higher feasibility of LSR in practical engineering applications. Project supported by the National Natural Science Foundation of China (Grant No. 51379526).
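
    A minimal Euler-Maruyama sketch of the bistable LSR setup described above, with dynamics dx/dt = a*x - b*x^3 + I1 + I2 + bias + sqrt(2D)*xi(t) and the logic output read from the sign of x. All parameter values are illustrative, not the tuned values of the paper, and the readout comment states the intended rather than guaranteed behaviour.

        import numpy as np

        rng = np.random.default_rng(0)
        a, b, D, bias = 1.0, 1.0, 0.3, 0.4      # illustrative parameters; positive bias aims at OR
        dt, steps_per_bit = 1e-3, 4000

        def run_lsr(bits1, bits2):
            """Drive the bistable system with two logic input streams; one output per bit."""
            x, out = 0.0, []
            for u1, u2 in zip(bits1, bits2):
                I = (u1 - 0.5) + (u2 - 0.5)     # map each logic level {0,1} to {-0.5, +0.5}
                for _ in range(steps_per_bit):
                    x += (a * x - b * x**3 + I + bias) * dt \
                         + np.sqrt(2 * D * dt) * rng.standard_normal()
                out.append(int(x > 0))          # OR-type readout; a negative bias targets AND
            return out

        print(run_lsr([0, 0, 1, 1], [0, 1, 0, 1]))   # ideally [0, 1, 1, 1] for an OR gate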

  2. Infrared and Raman spectra, adjusted r0 structural parameters, and vibrational assignment of isopropyl isocyanide

    Science.gov (United States)

    Sawant, Dattatray K.; Klaassen, Joshua J.; Panikar, Savitha S.; Durig, James R.

    2014-09-01

    Infrared spectra (3200-220 cm-1) of gaseous and Raman spectra (3200-40 cm-1) of liquid isopropyl isocyanide ((CH3)2CHNC) have been recorded. By utilizing the microwave rotational constants combined with the structural parameters predicted from MP2(full)/6-311+G(d,p) calculations, adjusted r0 parameters have been obtained for isopropyl isocyanide. The heavy atom distances in Å are: r(C1≡N2) = 1.176(3), r(N2–C3) = 1.437(3), r(C3–C4,5) = 1.525(5), and the angles in (°) are ∠C1N2C3 = 178.6(5); ∠N2C3C4,5 = 109.4(5); ∠C4C3C5 = 113.0(5). A complete vibrational assignment is proposed for isopropyl isocyanide based on infrared band contours, relative intensities, depolarization values, and group frequencies. The vibrational assignments were supported by a normal coordinate calculation utilizing the force constants from ab initio MP2(full)/6-31G(d). The results are discussed and compared to those obtained for some similar molecules.

  3. Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters

    Institute of Scientific and Technical Information of China (English)

    杨定新; 谷丰收; 冯国金; 杨拥民

    2015-01-01

    The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system is demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to different bit rate logic input signals, showing that an arbitrary high bit rate LSR in a bistable system cannot be achieved. Then, a normalized transform of the LSR bistable system is introduced through a kind of variable substitution. Based on the transform, it is found that LSR for arbitrary high bit rate logic signals in a bistable system can be achieved by adjusting the parameters of the system, setting bias value and amplifying the amplitudes of logic input signals and noise properly. Finally, the desired OR and AND logic outputs to high bit rate logic inputs in a bistable system are obtained by numerical simulations. The study might provide higher feasibility of LSR in practical engineering applications.

  4. A price adjustment process in a model of monopolistic competition

    NARCIS (Netherlands)

    J. Tuinstra

    2004-01-01

    We consider a price adjustment process in a model of monopolistic competition. Firms have incomplete information about the demand structure. When they set a price they observe the amount they can sell at that price and they observe the slope of the true demand curve at that price. With this informat

  5. Age of dam and sex of calf adjustments and genetic parameters for gestation length in Charolais cattle.

    Science.gov (United States)

    Crews, D H

    2006-01-01

    To estimate adjustment factors and genetic parameters for gestation length (GES), AI and calving date records (n = 40,356) were extracted from the Canadian Charolais Association field database. The average time from AI to calving date was 285.2 d (SD = 4.49 d) and ranged from 274 to 296 d. Fixed effects were sex of calf, age of dam (2, 3, 4, 5 to 10, ≥ 11 yr), and gestation contemporary group (year of birth x herd of origin). Variance components were estimated using REML and 4 animal models (n = 84,332) containing from 0 to 3 random maternal effects. Model 1 (M1) contained only direct genetic effects. Model 2 (M2) was M1 plus maternal genetic effects with the direct x maternal genetic covariance constrained to zero, and model 3 (M3) was M2 without the covariance constraint. Model 4 (M4) extended M3 to include a random maternal permanent environmental effect. Direct heritability estimates were high and similar among all models (0.61 to 0.64), and maternal heritability estimates were low, ranging from 0.01 (M2) to 0.09 (M3). Likelihood ratio tests and parameter estimates suggested that M4 was the most appropriate model, with age of dam adjustment factors derived for the 2-, 3-, 4-, 5- to 10-, and ≥ 11-yr-old classes. Bivariate animal models were used to estimate genetic parameters for GES with birth and adjusted 205-d weaning weights, and postweaning gain. Direct GES was positively correlated with direct birth weight (BWT; 0.34 ± 0.04) but negatively correlated with maternal BWT (-0.20 ± 0.07). Maternal GES had a low, negative genetic correlation with direct BWT (-0.15 ± 0.05) but a high and positive genetic correlation with maternal BWT (0.62 ± 0.07). Generally, GES had near-zero genetic correlations with direct and maternal weaning weights. Results suggest that important genetic associations exist for GES with BWT, but genetic correlations with weaning weight and postweaning gain were less important.

  6. PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL

    Institute of Scientific and Technical Information of China (English)

    钱炜祺; 蔡金狮

    2001-01-01

    A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.

  7. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    Science.gov (United States)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is to correctly estimate the parameters of those methods based on time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of the dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty allows us to emphasize the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.

  8. DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS

    OpenAIRE

    Miodrag Manić; Zoran Stamenković; Milorad Mitković; Miloš Stojković; Duncan E.T. Shephard

    2015-01-01

    Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. A CT scan is used to generate a 3D bone model and a fracture model. Using these scans, an indicated location for placing the implant is recognized and the design of a 3D model of customized implants is made. Wit...

  9. [Parameters with possible influence in PSA adjusted for transition zone volume].

    Science.gov (United States)

    Jara Rascón, J; Subirá Ríos, D; Lledó Garcia, E; Martínez Salamanca, J I; Moncada Iribarren, I; Cabello Benavente, R; Hernández Fernández, C

    2005-05-01

    To evaluate the effect of age, digital rectal examination results and prostatic volume on the PSA value adjusted for transition zone volume (PSA-TZ) in the detection of prostatic cancer, data from 243 patients with serum PSA of 4 to 20 ng/ml who underwent biopsy because of suspicion of prostatic cancer were analyzed. In this population, cancer was detected in 62 cases (24.8%). Total prostatic volume and transition zone volume were calculated by transrectal echography applying the ellipsoid formula. Applying linear regression analysis, no correlation was found between age and PSA-TZ (Pearson coefficient 0.00). Dividing the patients between those with normal digital rectal examination (84%) and those with suspicious digital rectal examination (16%), the cutoff values of PSA-TZ were not different by ROC curve analysis for 95% sensitivity, with specificity varying only between 24% and 26% across these two groups. Prostatic size (40 cc) showed that, to obtain the same 95% sensitivity in the detection of cancer, the PSA-TZ value would need to be modified, being 0.17 in large prostates (> 40 cc) and 0.25 in small prostates (≤ 40 cc). The utility of PSA-TZ as a potential predictor of prostatic cancer did not need to be modified with respect to age or to digital rectal examination findings. However, to maintain the sensitivity of its best cutoff value, PSA-TZ would need to be adjusted with respect to total prostatic volume.
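
    A small sketch of the quantities involved, assuming the transition zone volume is obtained from three orthogonal diameters with the ellipsoid formula mentioned above; the numbers are illustrative only and nothing here is diagnostic.

        import math

        def ellipsoid_volume_cc(d1_cm, d2_cm, d3_cm):
            """Volume from three orthogonal diameters via the ellipsoid formula (pi/6 * L * W * H)."""
            return math.pi / 6.0 * d1_cm * d2_cm * d3_cm

        def psa_tz(total_psa_ng_ml, tz_volume_cc):
            """PSA adjusted for transition zone volume (PSA-TZ), in ng/ml per cc."""
            return total_psa_ng_ml / tz_volume_cc

        tz_vol = ellipsoid_volume_cc(4.1, 3.3, 3.0)   # illustrative diameters in cm
        print(f"TZ volume = {tz_vol:.1f} cc, PSA-TZ = {psa_tz(6.5, tz_vol):.2f}")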

  10. Study of PID parameter engineering tuning methods

    Institute of Scientific and Technical Information of China (English)

    许雪

    2016-01-01

    This article analyzes the roles of the proportional, integral and derivative actions in the PID control model, introduces traditional PID parameter tuning methods, and summarizes empirical PID parameter tuning methods. The approach is applied to adjust the PID parameters of the primary and secondary loops of an alkali liquor falling-film evaporator flow-level cascade control system; the results indicate that the system controls the production process smoothly and stably.
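
    As a reference point for the tuning discussion above, a minimal discrete positional PID loop; the gains and sample time are placeholders to be set by whichever empirical or classical tuning rule is adopted, not values from the evaporator application.

        class PID:
            """Minimal positional PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Placeholder gains, e.g. taken from an empirical tuning pass
        controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
        u = controller.update(setpoint=50.0, measurement=47.3)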

  11. A simple approach to adjust tidal forcing in fjord models

    Science.gov (United States)

    Hjelmervik, Karina; Kristensen, Nils Melsom; Staalstrøm, André; Røed, Lars Petter

    2017-07-01

    To model currents in a fjord accurate tidal forcing is of extreme importance. Due to complex topography with narrow and shallow straits, the tides in the innermost parts of a fjord are both shifted in phase and altered in amplitude compared to the tides in the open water outside the fjord. Commonly, coastal tide information extracted from global or regional models is used on the boundary of the fjord model. Since tides vary over short distances in shallower waters close to the coast, the global and regional tidal forcings are usually too coarse to achieve sufficiently accurate tides in fjords. We present a straightforward method to remedy this problem by simply adjusting the tides to fit the observed tides at the entrance of the fjord. To evaluate the method, we present results from the Oslofjord, Norway. A model for the fjord is first run using raw tidal forcing on its open boundary. By comparing modelled and observed time series of water level at a tidal gauge station close to the open boundary of the model, a factor for the amplitude and a shift in phase are computed. The amplitude factor and the phase shift are then applied to produce adjusted tidal forcing at the open boundary. Next, we rerun the fjord model using the adjusted tidal forcing. The results from the two runs are then compared to independent observations inside the fjord in terms of amplitude and phases of the various tidal components, the total tidal water level, and the depth integrated tidal currents. The results show improvements in the modelled tides in both the outer, and more importantly, the inner parts of the fjord.
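
    A minimal sketch of the adjustment step described above: fit a single tidal harmonic (here the M2 constituent) to the modelled and observed water levels at the gauge near the open boundary by linear least squares, derive an amplitude factor and a phase shift, and apply them to the boundary forcing. The series, sampling and constituent choice are placeholders.

        import numpy as np

        OMEGA_M2 = 2 * np.pi / 12.4206012          # M2 angular frequency in rad/hour

        def harmonic_fit(t_hours, eta):
            """Least-squares amplitude and phase of one constituent: eta ~ A*cos(w*t - phi)."""
            X = np.column_stack([np.cos(OMEGA_M2 * t_hours),
                                 np.sin(OMEGA_M2 * t_hours),
                                 np.ones_like(t_hours)])
            a, b, _ = np.linalg.lstsq(X, eta, rcond=None)[0]
            return np.hypot(a, b), np.arctan2(b, a)

        t = np.arange(0.0, 30 * 24, 1.0)                    # one month, hourly (placeholder)
        eta_obs = 0.18 * np.cos(OMEGA_M2 * t - 0.6)         # placeholder observed water level
        eta_raw = 0.15 * np.cos(OMEGA_M2 * t - 0.4)         # placeholder raw boundary tide

        A_obs, phi_obs = harmonic_fit(t, eta_obs)
        A_raw, phi_raw = harmonic_fit(t, eta_raw)
        factor, shift = A_obs / A_raw, phi_obs - phi_raw    # amplitude factor and phase shift

        eta_adjusted = factor * 0.15 * np.cos(OMEGA_M2 * t - 0.4 - shift)   # adjusted forcing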

  12. 40 CFR 91.112 - Requirement of certification-adjustable parameters.

    Science.gov (United States)

    2010-07-01

    ... is permanently sealed by the manufacturer or otherwise not normally accessible using ordinary tools... adjustable range during certification, production line testing, selective enforcement auditing or any...

  13. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  14. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    Science.gov (United States)

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  15. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  16. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
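
    For reference, the Nash-Sutcliffe efficiency used as the performance measure above; a small sketch with placeholder discharge arrays:

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
            obs = np.asarray(observed, dtype=float)
            sim = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        print(nash_sutcliffe([2.1, 3.4, 5.0, 4.2], [2.0, 3.6, 4.7, 4.4]))   # placeholder values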

  17. Use of generalised Procrustes analysis for the photogrammetric block adjustment by independent models

    Science.gov (United States)

    Crosilla, Fabio; Beinat, Alberto

    The paper reviews at first some aspects of the generalised Procrustes analysis (GP) and outlines the analogies with the block adjustment by independent models. On this basis, an innovative solution of the block adjustment problem by Procrustes algorithms and the related computer program implementation are presented and discussed. The main advantage of the new proposed method is that it avoids the conventional least squares solution. For this reason, linearisation algorithms and the knowledge of a priori approximate values for the unknown parameters are not required. Once the model coordinates of the tie points are available and at least three control points are known, the Procrustes algorithms can directly provide, without further information, the tie point ground coordinates and the exterior orientation parameters. Furthermore, some numerical block adjustment solutions obtained by the new method in different areas of North Italy are compared to the conventional solution. The very simple data input process, the less memory requirements, the low computing time and the same level of accuracy that characterise the new algorithm with respect to a conventional one are verified with these tests. A block adjustment of 11 models, with 44 tie points and 14 control points, takes just a few seconds on an Intel PIII 400 MHz computer, and the total data memory required is less than twice the allocated space for the input data. This is because most of the computations are carried out on data matrices of limited size, typically 3×3.
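
    The core of any such Procrustes step is the least-squares alignment of one coordinate set to another, which has a closed-form solution via the singular value decomposition. A generic sketch of plain orthogonal Procrustes with scale and translation (the generalised, multi-model formulation of the paper is not reproduced here):

        import numpy as np

        def procrustes_align(model_xyz, control_xyz):
            """Similarity transform (scale s, rotation R, translation t) mapping
            model coordinates onto control coordinates in the least-squares sense:
            control ~= s * model @ R + t."""
            A = np.asarray(model_xyz, dtype=float)
            B = np.asarray(control_xyz, dtype=float)
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            A0, B0 = A - ca, B - cb
            U, S, Vt = np.linalg.svd(A0.T @ B0)
            d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
            Dm = np.diag([1.0] * (A0.shape[1] - 1) + [d])
            R = U @ Dm @ Vt
            s = np.trace(Dm @ np.diag(S)) / np.sum(A0 ** 2)
            t = cb - s * ca @ R
            return s, R, t

        # Synthetic check: ground coordinates are a scaled, shifted copy of the model ones
        ctrl_model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
        ctrl_ground = 2.0 * ctrl_model + np.array([10.0, 20.0, 5.0])
        s, R, t = procrustes_align(ctrl_model, ctrl_ground)
        print(s, t)   # recovers scale 2 and the translation for this synthetic case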

  18. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    OpenAIRE

    Hadiyanto Hadiyanto; AJB van Boxtel

    2012-01-01

    Bread product quality is highly dependent to the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave powers. The model parameters were estimated in a stepwise procedure i.e. first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...

  19. Parameter counting in models with global symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Joshua [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: jb454@cornell.edu; Grossman, Yuval [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: yuvalg@lepp.cornell.edu

    2009-05-18

    We present rules for determining the number of physical parameters in models with exact flavor symmetries. In such models the total number of parameters (physical and unphysical) needed to describe a matrix is less than in a model without the symmetries. Several toy examples are studied in order to demonstrate the rules. The use of global symmetries in studying the minimally supersymmetric standard model (MSSM) is examined.

  20. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian...... method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...

  1. Cosmological models with constant deceleration parameter

    Energy Technology Data Exchange (ETDEWEB)

    Berman, M.S.; de Mello Gomide, F.

    1988-02-01

    Berman presented elsewhere a law of variation for Hubble's parameter that yields constant deceleration parameter models of the universe. By analyzing Einstein, Pryce-Hoyle and Brans-Dicke cosmologies, we derive here the necessary relations in each model, considering a perfect fluid.

  2. Gain Parameter Adjustment Methods Comparison of Controller for Autonomous Rehabilitation Device

    OpenAIRE

    Eski, Ikbal; Kirnap, Ahmet

    2016-01-01

    PID controller design and a comparison between two different gain parameter adjustment methods for an autonomous physical rehabilitation device are presented in this paper. This device will be capable of performing repeated therapeutic exercises of the shoulder joint. The device's main objective is reducing the physiotherapist's workload. The controllers were tested with real angle values. Comparison of the simulation results showed that the Ziegler-Nichols adjustment method has better performance than Matlab's auto-tune method.

  3. Study of dual wavelength composite output of solid state laser based on adjustment of resonator parameters

    Science.gov (United States)

    Wang, Lei; Nie, Jinsong; Wang, Xi; Hu, Yuze

    2016-10-01

    The 1064 nm fundamental wave (FW) and the 532 nm second harmonic wave (SHW) of the Nd:YAG laser have been widely applied in many fields. In some military applications requiring interference in both the visible and near-infrared spectral ranges, de-identification interference technology based on the dual-wavelength composite output of the FW and SHW offers an effective way of making the device or equipment miniaturized and low cost. In this paper, the application of 1064 nm and 532 nm dual-wavelength composite output technology in military electro-optical countermeasures is studied. A resonator configuration that can achieve composite laser output with high power, high beam quality and high repetition rate is proposed. Considering the thermal lens effect, the stability of this resonator is analyzed based on the theory of the cavity transfer matrix. It shows that with increasing thermal effect, the intracavity fundamental mode volume decreases, resulting in peak fluctuation of the cavity stability parameter. To explore the impact of the resonator parameters on the characteristics and output ratio of the composite laser, dual-wavelength composite output models of the solid-state laser in both continuous and pulsed operation are established from the steady-state equation and the rate equations. Through theoretical simulation and analysis, the optimal KTP length and the best FW transmissivity are obtained. An experiment is then carried out to verify the correctness of the theoretical calculation results.

  4. Trait Characteristics of Diffusion Model Parameters

    Directory of Open Access Journals (Sweden)

    Anna-Lena Schubert

    2016-07-01

    Full Text Available Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed a great temporal stability over a period of eight months. However, the coefficients of consistency and reliability were only low to moderate and highest for drift rate parameters. These results show that the consistent variance of diffusion model parameters across tasks can be regarded as temporally stable ability parameters. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.

  5. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that...
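
    For context, the generic two-regime LSTAR specification whose transition-speed parameter $\gamma$ is at issue (standard textbook form; the paper's reparametrization itself is not reproduced here):

        $y_t = \phi_1' x_t\,(1 - G(s_t;\gamma,c)) + \phi_2' x_t\,G(s_t;\gamma,c) + \varepsilon_t$, with $G(s_t;\gamma,c) = 1/(1 + \exp\{-\gamma (s_t - c)\})$,

    where $s_t$ is the transition variable, $c$ the location and $\gamma > 0$ the speed of transition; as $\gamma \to \infty$ the logistic function approaches a step and the model approaches a threshold model, which is the root of the identification problem.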

  6. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  7. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant bias in the estimated parameters. By accounting for irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  9. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady state response, when using lumped-parameter models, is given. (au)

  10. Adjustable box-wing model for solar radiation pressure impacting GPS satellites

    Science.gov (United States)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.

    2012-04-01

    One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model, and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior, to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite's surfaces. In addition, the so called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data the estimated parameters and orbits are analyzed. The performance of the models is comparable, when looking at orbit overlap and orbit

  11. Statefinder parameters in two dark energy models

    CERN Document Server

    Panotopoulos, Grigoris

    2007-01-01

    The statefinder parameters ($r,s$) in two dark energy models are studied. In the first, we discuss in four-dimensional General Relativity a two fluid model, in which dark energy and dark matter are allowed to interact with each other. In the second model, we consider the DGP brane model generalized by taking a possible energy exchange between the brane and the bulk into account. We determine the values of the statefinder parameters that correspond to the unique attractor of the system at hand. Furthermore, we produce plots in which we show $s,r$ as functions of red-shift, and the ($s-r$) plane for each model.
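
    For reference, the standard definitions of the deceleration parameter and the statefinder pair used in such analyses (assumed here in the usual conventions, since the record does not restate them):

        $q = -\ddot{a}/(aH^2)$, $r = \dddot{a}/(aH^3)$, $s = (r - 1)/\left[3\,(q - 1/2)\right]$,

    with $a(t)$ the scale factor and $H = \dot{a}/a$ the Hubble parameter; the $\Lambda$CDM model corresponds to the fixed point $(r, s) = (1, 0)$.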

  12. Parameter Symmetry of the Interacting Boson Model

    CERN Document Server

    Shirokov, A M; Smirnov, Yu F; Shirokov, Andrey M.; Smirnov, Yu. F.

    1998-01-01

    We discuss the symmetry of the parameter space of the interacting boson model (IBM). It is shown that for any set of the IBM Hamiltonian parameters (with the only exception of the U(5) dynamical symmetry limit) one can always find another set that generates the equivalent spectrum. We discuss the origin of the symmetry and its relevance for physical applications.

  13. Wind Farm Decentralized Dynamic Modeling With Parameters

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran;

    2010-01-01

    Development of dynamic wind flow models for wind farms is part of the research in European research FP7 project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from...

  14. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  15. Constructing stochastic models from deterministic process equations by propensity adjustment

    Directory of Open Access Journals (Sweden)

    Wu Jialiang

    2011-11-01

    Full Text Available Abstract Background Gillespie's stochastic simulation algorithm (SSA for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy to stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of CME are almost always complicated, and successes have been limited to relative simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic
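
    For reference, a minimal direct-method Gillespie SSA for elementary mass-action reactions, the stochastic framework that the propensity adjustment targets; the species, reactions and rate constants below form a placeholder reversible dimerisation, not one of the paper's case studies.

        import numpy as np

        rng = np.random.default_rng(1)

        # Placeholder system: 2A -> B with rate c1, B -> 2A with rate c2
        c = np.array([0.002, 0.5])
        stoich = np.array([[-2, +1],      # effect of reaction 1 on (A, B)
                           [+2, -1]])     # effect of reaction 2 on (A, B)

        def propensities(x):
            A, B = x
            return np.array([c[0] * A * (A - 1) / 2.0,   # second-order mass action
                             c[1] * B])                  # first-order mass action

        def gillespie(x0, t_end):
            t, x, path = 0.0, np.array(x0, dtype=float), []
            while t < t_end:
                a = propensities(x)
                a0 = a.sum()
                if a0 == 0:
                    break
                t += rng.exponential(1.0 / a0)           # waiting time to the next reaction
                j = rng.choice(len(a), p=a / a0)         # which reaction fires
                x = x + stoich[j]
                path.append((t, x.copy()))
            return path

        trajectory = gillespie(x0=(300, 0), t_end=10.0)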

  16. Model for Adjustment of Aggregate Forecasts using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Taracena–Sanz L. F.

    2010-07-01

    Full Text Available This research suggests a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the projection of demand to the firm's environment, and it is based on three considerations that cause demand forecasts to differ from reality in many cases: (1) one of the most difficult problems to model in forecasting is the uncertainty related to the available information; (2) the methods traditionally used by firms for the projection of demand are mainly based on past behavior of the market (historical demand); and (3) these methods do not consider in their analysis the factors that drive the observed behaviour. Therefore, the proposed model is based on the implementation of Fuzzy Logic, integrating the main variables that affect the behavior of market demand and which are not considered in the classical statistical methods. The model was applied to a bottler of carbonated beverages, and with the adjustment of the demand projection a more reliable forecast was obtained.

  17. DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS

    Directory of Open Access Journals (Sweden)

    Miodrag Manić

    2015-12-01

    Full Text Available Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. A CT scan is used to generate a 3D bone model and a fracture model. Using these scans, an indicated location for placing the implant is recognized and the design of a 3D model of customized implants is made. With this method it is possible to design volumetric implants used for replacing a part of the bone or a plate type for fixation of a bone part. The sides of the implants, this one lying on the bone, are fully aligned with the anatomical shape of the bone surface which neighbors the fracture. The given model is designed for implants production utilizing any method, and it is ideal for 3D printing of implants.

  18. Delineating Parameter Unidentifiabilities in Complex Models

    CERN Document Server

    Raman, Dhruva V; Papachristodoulou, Antonis

    2016-01-01

    Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...

  19. Parameter Estimation, Model Reduction and Quantum Filtering

    CERN Document Server

    Chase, Bradley A

    2009-01-01

    This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.

  20. RICE DEMAND IN JAMBI PROVINCE (An Application of the Partial Adjustment Model)

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    Full Text Available The purpose of this study is to determine the effect of the price of rice, the price of flour, population, population income, and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand prediction in Jambi Province. This study uses secondary data, including time series data for 22 years, from 1988 until 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB), and demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and a dynamic Partial Adjustment Model, where the demand for rice is the dependent variable and the price of rice, the price of flour, population, population income, and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in demand for rice are not significant. Population and the previous year's rice demand have a positive and significant impact on demand for rice, while population income has a negative and significant impact on rice demand. The price of rice, population income, and the price of flour are inelastic with respect to rice demand, because rice is not a normal good but a necessity, so there is no substitution of goods (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting demand for rice. It is also expected that the government begins to promote non-rice food consumption to control the increasing demand for rice. Finally, it is suggested that the government develop a diversification of staple foods other than rice.
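
    For orientation, the block below writes out the standard Nerlove-type Partial Adjustment Model that this kind of analysis is typically built on, using the variable names from the abstract; the exact specification estimated by the author is an assumption here, and short-run coefficients are converted to long-run ones by dividing by the adjustment coefficient.

```latex
% Desired (long-run) demand and the partial adjustment mechanism
\begin{align*}
  Qd^{*}_{t} &= \beta_0 + \beta_1 Hb_t + \beta_2 Hg_t + \beta_3 Jp_t + \beta_4 PDRB_t + u_t,\\
  Qd_t - Qd_{t-1} &= \delta\,\bigl(Qd^{*}_{t} - Qd_{t-1}\bigr), \qquad 0 < \delta \le 1,\\
  \Longrightarrow\quad
  Qd_t &= \delta\beta_0 + \delta\beta_1 Hb_t + \delta\beta_2 Hg_t
          + \delta\beta_3 Jp_t + \delta\beta_4 PDRB_t
          + (1-\delta)\,Qd_{t-1} + \delta u_t.
\end{align*}
```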

  1. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
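
    As a concrete illustration of the special case mentioned above, the sketch below fits the Luce choice model to top-1 lists by maximizing the log-likelihood with plain gradient ascent; it is a minimal sketch under hypothetical data, not the estimator or rank-breaking scheme analyzed in the paper.

```python
import numpy as np

def luce_mle(choices, n_items, iters=500, lr=0.1):
    """Maximum-likelihood log-strengths for the Luce choice model from top-1 lists.

    choices : list of (winner, comparison_set) pairs, e.g. (2, [0, 2, 5])
    Returns log-strengths normalised to sum to zero (to fix the scale).
    """
    theta = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for winner, cset in choices:
            cset = np.asarray(cset)
            w = np.exp(theta[cset])
            p = w / w.sum()          # model choice probabilities within the set
            grad[winner] += 1.0      # gradient of the log-likelihood: winner indicator...
            grad[cset] -= p          # ...minus the model probabilities
        theta += lr * grad / len(choices)
        theta -= theta.mean()        # identifiability: strengths are defined up to a shift
    return theta

# Hypothetical comparison data over three items (each item wins and loses at least once).
data = [(1, [0, 1]), (1, [1, 2]), (2, [1, 2]), (0, [0, 1, 2]), (1, [0, 1, 2])]
print(luce_mle(data, n_items=3))
```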

  2. Adjusting the Parameters of PI Regulator in All-electrical Tank Gun Control System by Computer Program

    Institute of Scientific and Technical Information of China (English)

    ZANG Ke-mao; HE Yue; LI Kuang-cheng; LI Chang-bing

    2005-01-01

    Both the traverse subsystem and the elevation subsystem of the all-electrical tank gun control system are each composed of an electrical drive control system. The parameters of the PI regulator in these electrical drive control systems seriously affect the performance of the control system. Up to now, there has been no simple and practical method for choosing the regulator parameters, which are usually determined by repeated and continual readjustment. This approach is inefficient, and the parameters obtained are not always optimal. A method for on-line adjustment of the parameters of the PI regulator in the electrical drive control system by a computer program is introduced in this paper. The adjustment of the PI parameters of the electrical drive control system is realized by a PC program written in VC++, a control program written in assembly language, and communication between the PC and the DSP implemented with the MSComm control in VC++ 6.0. The method, applied to an all-electrical tank gun control system under development, proved very effective, and better performance of the all-electrical tank gun control system can be obtained easily.
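
    For readers who want to see what the two adjustable parameters do, the sketch below is a generic discrete-time PI regulator with gains that can be changed on-line; it is a minimal illustration and not the authors' VC++/assembly/DSP implementation.

```python
class PIRegulator:
    """Discrete-time PI regulator with on-line adjustable gains and simple anti-windup."""

    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def set_gains(self, kp, ki):
        # Called whenever new parameters arrive from the tuning program.
        self.kp, self.ki = kp, ki

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the output; back off the integrator when saturated (anti-windup).
        if u > self.out_max or u < self.out_min:
            self.integral -= error * self.dt
            u = max(self.out_min, min(self.out_max, u))
        return u

# Example: gains updated on-line, then one control step.
reg = PIRegulator(kp=0.8, ki=0.2, dt=0.01)
reg.set_gains(kp=1.0, ki=0.3)
print(reg.update(setpoint=1.0, measurement=0.7))
```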

  3. Hydrologic modeling using elevationally adjusted NARR and NARCCAP regional climate-model simulations: Tucannon River, Washington

    Science.gov (United States)

    Praskievicz, Sarah; Bartlein, Patrick

    2014-09-01

    An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
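
    The elevation adjustment itself reduces to a simple linear correction once the monthly lapse rates are known; the sketch below shows that step in Python, with hypothetical numbers (the lapse rate would come from the PRISM-based regressions described above).

```python
def adjust_temperature(t_rcm, elev_rcm, elev_target, lapse_rate):
    """Elevationally adjust a coarse RCM temperature using a local topographic lapse rate.

    t_rcm       : RCM temperature at the coarse grid cell (deg C)
    elev_rcm    : elevation of the RCM grid cell (m)
    elev_target : elevation of the fine-scale target point (m)
    lapse_rate  : local topographic lapse rate for the month (deg C per m,
                  typically negative for temperature)
    """
    return t_rcm + lapse_rate * (elev_target - elev_rcm)

# Hypothetical example: a -6.5e-3 deg C/m lapse rate, target 600 m above the RCM cell.
print(adjust_temperature(t_rcm=10.0, elev_rcm=900.0, elev_target=1500.0,
                         lapse_rate=-6.5e-3))   # -> 6.1 deg C
```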

  4. Delineating parameter unidentifiabilities in complex models

    Science.gov (United States)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
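
    For contrast with the multiscale approach described above, the sketch below computes the standard local sensitivity object, a finite-difference Fisher information matrix under independent Gaussian noise, whose near-zero eigenvalues flag locally sloppy (practically unidentifiable) parameter directions; this is the local baseline the authors argue is insufficient, not their algorithm, and the toy model is hypothetical.

```python
import numpy as np

def fisher_information(model, theta, sigma=1.0, eps=1e-6):
    """Local Fisher information matrix for y = model(theta) with independent
    Gaussian measurement noise of standard deviation sigma: FIM = J^T J / sigma^2,
    where J is the output-parameter Jacobian from central finite differences."""
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(model(theta))
    J = np.zeros((y0.size, theta.size))
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (np.asarray(model(theta + dp)) - np.asarray(model(theta - dp))) / (2 * eps)
    fim = J.T @ J / sigma**2
    eigvals, eigvecs = np.linalg.eigh(fim)   # near-zero eigenvalues = sloppy directions
    return fim, eigvals, eigvecs

# Hypothetical model with a structurally redundant combination: only k1*k2 matters.
model = lambda th: np.array([th[0] * th[1] * t for t in (1.0, 2.0, 3.0)])
_, eigvals, _ = fisher_information(model, [2.0, 0.5])
print(eigvals)   # one eigenvalue is numerically zero -> an unidentifiable direction
```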

  5. Systematic parameter inference in stochastic mesoscopic modeling

    Science.gov (United States)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived from the microscopic level in a straightforward way.

  6. MODELING PARAMETERS OF ARC OF ELECTRIC ARC FURNACE

    Directory of Open Access Journals (Sweden)

    R.N. Khrestin

    2015-08-01

    Full Text Available Purpose. The aim is to build a mathematical model of the electric arc of an electric arc furnace (EAF). The model should clearly show the relationships between the main parameters of the arc. These parameters determine the properties of the arc and the possibilities for optimizing the melting mode. Methodology. We have built a fairly simple model of the arc that satisfies the above requirements. The model is designed for the analysis of electromagnetic processes in an arc of varying length. We have compared the results obtained when testing the model with results obtained on actual furnaces. Results. During melting in a real EAF, the arc plasma changes its properties under the influence of temperature changes. The proposed model takes these changes into account. Adjusting the arc length is the main way to regulate the melting mode of the EAF. The arc length is controlled by the movement of the electrode drive. The model reflects the dynamic changes in the parameters of the arc when its length changes. We obtained the dynamic current-voltage characteristics (CVC) of the arc for the different stages of melting. We also obtained the arc voltage waveform and identified criteria by which the stage of smelting can be determined. Originality. In contrast to previously known models, this model clearly shows the relationship between the main parameters of the EAF arc: the arc voltage Ud, the arc current id, and the arc length d. Comparison of the simulation results with experimental data obtained from a real EAF showed the adequacy of the constructed model. It was found that the character of change of the quantity Md helps determine the stage of melting. Practical value. The model can be used to simulate smelting in an EAF of any capacity. Thus, when designing the control system for the electrode movement mechanism, the model takes into account changes in the parameters of the arc, which can significantly reduce electrode material consumption and energy consumption.

  7. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...

  8. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.]

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  9. Dose Adjustment Strategy of Cyclosporine A in Renal Transplant Patients: Evaluation of Anthropometric Parameters for Dose Adjustment and C0 vs. C2 Monitoring in Japan, 2001-2010

    Science.gov (United States)

    Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki

    2013-01-01

    The optimal use and monitoring of cyclosporine A (CyA) have remained unclear and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; and the secondary aim was to compare the usefulness of the concentration at predose (C0) and 2-hour postdose (C2) monitoring. An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficient was C0 > C2 (C0; r=0.6273, C2; r=0.5562). The linear regression analyses between AUC0-9 of CyA and candidate parameters indicated their potential usefulness from the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > Age. In conclusion, after oral administration, C2 monitoring has a large variation and could be at high risk for overdosing. Therefore, after oral dosing of CyA, it was not considered to be a useful approach for single monitoring, but should rather be used with C0 monitoring. The regression analyses between AUC0-9 of CyA and anthropometric parameters indicated that IBW was potentially the superior predictor for dose adjustment of CyA in an empiric strategy using TBW (IBW; r=0.5181, TBW; r=0.3192); however, this finding seems to lack the pharmacokinetic rationale and thus warrants further basic and clinical

  10. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    Science.gov (United States)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
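
    The Dice coefficient used in the evaluation is easy to state explicitly; the sketch below computes it for two binary masks with a toy example, and is only an illustration of the metric rather than part of the BSE optimization itself.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks (1 = brain, 0 = non-brain)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Hypothetical toy example: estimated mask vs. expert ground truth.
estimated    = np.array([0, 1, 1, 1, 0, 0])
ground_truth = np.array([0, 1, 1, 0, 0, 0])
print(dice_coefficient(estimated, ground_truth))   # 2*2 / (3+2) = 0.8
```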

  11. Proximal Alternating Direction Method with Relaxed Proximal Parameters for the Least Squares Covariance Adjustment Problem

    Directory of Open Access Journals (Sweden)

    Minghua Xu

    2014-01-01

    Full Text Available We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods which can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are greater than zero. In this paper, we conclude that the restriction on the proximal parameters can be relaxed for solving this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally has a better performance than the classical proximal alternating direction method.

  12. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    Science.gov (United States)

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  13. Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI Wizard

    Directory of Open Access Journals (Sweden)

    Felix eGembler

    2015-12-01

    Full Text Available Brain-computer interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g. stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  14. Estimation of Model Parameters for Steerable Needles

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451

  15. Estimation of Model Parameters for Steerable Needles.

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.

  16. An Optimization Model of Tunnel Support Parameters

    Directory of Open Access Journals (Sweden)

    Su Lijuan

    2015-05-01

    Full Text Available An optimization model was developed to obtain the ideal values of the primary support parameters of tunnels, which are wide-ranging in high-speed railway design codes when the surrounding rocks are at the III, IV, and V levels. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, level of surrounding rock, buried depth of tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis following the analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation using the least squares method. In practical projects, the optimized values of support parameters could be obtained by integrating known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
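
    The regression step described above amounts to an ordinary least-squares fit of the simulated response to the design factors; the sketch below shows that step with entirely hypothetical design points and response values (the real inputs would come from the FLAC3D runs of the orthogonal design).

```python
import numpy as np

# Hypothetical orthogonal-design results. Columns: rock level, depth (m),
# lateral pressure coefficient, anchor spacing (m), anchor length (m),
# shotcrete thickness (m); y is a simulated response such as crown displacement (mm).
X = np.array([
    [3, 100, 0.8, 1.0, 3.0, 0.20],
    [4, 200, 1.0, 1.2, 3.5, 0.25],
    [5, 300, 1.2, 0.8, 4.0, 0.30],
    [3, 300, 1.0, 1.2, 4.0, 0.20],
    [4, 100, 1.2, 0.8, 3.0, 0.25],
    [5, 200, 0.8, 1.0, 3.5, 0.30],
    [3, 200, 1.2, 1.0, 3.5, 0.25],
    [4, 300, 0.8, 0.8, 3.0, 0.30],
])
y = np.array([4.1, 6.3, 9.0, 6.8, 5.2, 7.5, 5.9, 8.1])

# Multiple linear regression with an intercept, solved by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # intercept followed by one coefficient per design factor
```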

  17. Rice Demand in Jambi Province (An Application of the Partial Adjustment Model)

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    Full Text Available The purpose of this study is to determine the effect of the price of rice, the price of flour, population, population income, and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand prediction in Jambi Province. This study uses secondary data, including time series data for 22 years, from 1988 until 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB), and demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and a dynamic Partial Adjustment Model, where the demand for rice is the dependent variable and the price of rice, the price of flour, population, population income, and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in demand for rice are not significant. Population and the previous year's rice demand have a positive and significant impact on demand for rice, while population income has a negative and significant impact on rice demand. The price of rice, population income, and the price of flour are inelastic with respect to rice demand, because rice is not a normal good but a necessity, so there is no substitution of goods (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting demand for rice. It is also expected that the government begins to promote non-rice food consumption to control the increasing demand for rice. Finally, it is suggested that the government develop a diversification of staple foods other than rice. Keywords: Demand, Rice, Income Population

  18. Analysis of Modeling Parameters on Threaded Screws.

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.

  19. Adjusting inkjet printhead parameters to deposit drugs into micro-sized reservoirs

    Directory of Open Access Journals (Sweden)

    Mau Robert

    2016-09-01

    Full Text Available Drug delivery systems (DDS) ensure that therapeutically effective drug concentrations are delivered locally to the target site. For that reason, it is common to coat implants with a degradable polymer which contains drugs. However, the use of polymers as drug carriers has been associated with adverse side effects. For that reason, several technologies have been developed to design polymer-free DDS. It has been shown in the literature that micro-sized reservoirs can be applied as drug reservoirs, and inkjet techniques are capable of depositing drugs into these reservoirs. In this study, two different geometries of micro-sized reservoirs have been loaded with a drug (ASA) using a drop-on-demand inkjet printhead. Correlations between the characteristics of the drug solution, the operating parameters of the printhead, and the geometric parameters of the reservoir are shown. It is indicated that the wettability of the surface plays a key role in drug deposition into micro-sized reservoirs.

  20. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
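
    The core of the Green's Functions step is a small least-squares problem: find the linear combination of sensitivity-experiment responses that best explains the model-data misfit. The sketch below shows that step with placeholder arrays; it is a schematic of the idea, not the ECCO2-Darwin machinery.

```python
import numpy as np

def greens_function_adjustment(baseline, perturbed_runs, observations):
    """Least-squares Green's Functions weights.

    baseline       : model equivalents of the observations from the baseline run, shape (n_obs,)
    perturbed_runs : model equivalents from each sensitivity experiment, shape (n_exp, n_obs)
    observations   : the data constraints, shape (n_obs,)

    Returns weights eta minimizing || d - G eta ||^2, where d is the baseline
    misfit and the columns of G are the responses of the individual experiments.
    """
    baseline = np.asarray(baseline, dtype=float)
    d = np.asarray(observations, dtype=float) - baseline
    G = (np.asarray(perturbed_runs, dtype=float) - baseline).T   # shape (n_obs, n_exp)
    eta, *_ = np.linalg.lstsq(G, d, rcond=None)
    return eta

# Hypothetical toy numbers: two sensitivity experiments, four observation points.
base = np.array([1.0, 2.0, 3.0, 4.0])
runs = np.array([[1.2, 2.1, 3.3, 4.0],
                 [0.9, 2.4, 2.8, 4.5]])
obs  = np.array([1.1, 2.3, 3.1, 4.3])
print(greens_function_adjustment(base, runs, obs))
```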

  1. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    Science.gov (United States)

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    Science.gov (United States)

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  3. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    Science.gov (United States)

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  4. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    Science.gov (United States)

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  5. An automated approach for tone mapping operator parameter adjustment in security applications

    Science.gov (United States)

    Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick

    2014-05-01

    High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Different from the traditional low dynamic range (LDR), HDR content tends to be visually more appealing and realistic as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced. As a direct consequence, the visual quality tends to improve. HDR can also be directly exploited for new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying/tracking visual information which otherwise might be difficult with typical LDR content due to factors such as lack/excess of illumination, extreme contrast in the scene, etc. On the other hand, with HDR, there might be issues related to increased privacy intrusion. To display the HDR content on a regular screen, tone-mapping operators (TMO) are used. In this paper, we present a universal method for TMO parameter tuning, in order to maintain as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone-mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could be of advantage for security surveillance but, on the other hand, makes us consider a possible increase in privacy intrusion.

  6. The Lund Model at Nonzero Impact Parameter

    CERN Document Server

    Janik, R A; Janik, Romuald A.; Peschanski, Robi

    2003-01-01

    We extend the formulation of the longitudinal 1+1 dimensional Lund model to nonzero impact parameter using the minimal area assumption. Complete formulae for the string breaking probability and the momenta of the produced mesons are derived using the string worldsheet Minkowskian helicoid geometry. For strings stretched into the transverse dimension, we find a probability distribution with a slope linear in m_T, similar to that of the statistical models but without any thermalization assumptions.

  7. IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL

    Institute of Scientific and Technical Information of China (English)

    Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao

    2004-01-01

    The traditional lumped parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Furthermore, two suggestions are put forward to remove these drawbacks. Firstly, the structure of the equivalent circuit is modified, and then the evaluation of the equivalent fluid resistance is changed to take the frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.

  8. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave heights from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...

  9. Setting of Agricultural Insurance Premium Rate and the Adjustment Model

    Institute of Scientific and Technical Information of China (English)

    HUANG Ya-lin

    2012-01-01

    First, using the law of large numbers, I analyze the principles for setting agricultural insurance premium rates, taking the setting of the adult sow premium rate as a case study, and draw the conclusion that with the continuous promotion of agricultural insurance, the increase in the types of agricultural insurance, and the increase in the number of the insured, the premium rate should also be adjusted in a timely manner. Then, on the basis of Bayes' theorem, I adjust and calibrate the claim frequency and the average claim in order to correctly adjust the agricultural insurance premium rate, taking forest insurance as a case for premium rate adjustment analysis. In setting and adjusting agricultural insurance premium rates, in order to bring the expected results close to the real results, it is necessary to apply the probability estimates to a large number of risk units and to focus on the establishment of an agricultural risk database, so as to adjust agricultural insurance premium rates in a timely manner.
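
    One standard way to carry out such a Bayesian calibration of the claim frequency is conjugate Poisson-gamma updating, which yields a credibility-weighted premium rate; the block below is a generic textbook form, offered as an assumption about the kind of adjustment meant, not as the author's exact formulas.

```latex
% Poisson--gamma updating of the claim frequency \lambda: prior
% \lambda \sim \mathrm{Gamma}(\alpha,\beta), with k claims observed over n risk units.
\begin{align*}
  \lambda \mid \text{data} &\sim \mathrm{Gamma}(\alpha + k,\ \beta + n),\\
  \mathbb{E}[\lambda \mid \text{data}]
    &= \frac{\alpha + k}{\beta + n}
     = Z\,\frac{k}{n} + (1 - Z)\,\frac{\alpha}{\beta},
  \qquad Z = \frac{n}{n + \beta},
\end{align*}
% so the adjusted rate is a credibility blend of the observed claim frequency k/n
% and the prior mean \alpha/\beta, with the weight Z growing as more risk units accumulate.
```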

  10. More Efficient Bayesian-based Optimization and Uncertainty Assessment of Hydrologic Model Parameters

    Science.gov (United States)

    2012-02-01

    is more objective, repeatable, and better capitalizes on the computational capacity of the modern computer) is an active area of research and...existence of multiple local optima, non-smooth objective function surfaces, and long valleys in parameter space that are a result of excessive parameter...outputs, structural aspects of the model, as well as its input dataset, model parameters that are adjustable through the calibration process, and the

  11. Order Parameters of the Dilute A Models

    CERN Document Server

    Warnaar, S O; Seaton, K A; Nienhuis, B

    1993-01-01

    The free energy and local height probabilities of the dilute A models with broken $\mathbb{Z}_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\delta=15$ directly, without the use of scaling relations.

  12. Testing Linear Models for Ability Parameters in Item Response Models

    NARCIS (Netherlands)

    Glas, Cees A.W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum like

  13. Comparison of Inorganic Carbon System Parameters Measured in the Atlantic Ocean from 1990 to 1998 and Recommended Adjustments

    Energy Technology Data Exchange (ETDEWEB)

    Wanninkhof, R.

    2003-05-21

    As part of the global synthesis effort sponsored by the Global Carbon Cycle project of the National Oceanic and Atmospheric Administration (NOAA) and U.S. Department of Energy, a comprehensive comparison was performed of inorganic carbon parameters measured on oceanographic surveys carried out under auspices of the Joint Global Ocean Flux Study and related programs. Many of the cruises were performed as part of the World Hydrographic Program of the World Ocean Circulation Experiment and the NOAA Ocean-Atmosphere Carbon Exchange Study. Total dissolved inorganic carbon (DIC), total alkalinity (TAlk), fugacity of CO2, and pH data from twenty-three cruises were checked to determine whether there were systematic offsets of these parameters between cruises. The focus was on the DIC and TAlk state variables. Data quality and offsets of DIC and TAlk were determined by using several different techniques. One approach was based on crossover analyses, where the deep-water concentrations of DIC and TAlk were compared for stations on different cruises that were within 100 km of each other. Regional comparisons were also made by using a multiple-parameter linear regression technique in which DIC or TAlk was regressed against hydrographic and nutrient parameters. When offsets of greater than 4 µmol/kg were observed for DIC and/or 6 µmol/kg were observed for TAlk, the data taken on the cruise were closely scrutinized to determine whether the offsets were systematic. Based on these analyses, the DIC data and TAlk data of three cruises were deemed of insufficient quality to be included in the comprehensive basinwide data set. For several of the cruises, small adjustments in TAlk were recommended for consistency with other cruises in the region. After these adjustments were incorporated, the inorganic carbon data from all cruises along with hydrographic, chlorofluorocarbon, and nutrient data were combined as a research quality product for the scientific community.

  14. Modelling spin Hamiltonian parameters of molecular nanomagnets.

    Science.gov (United States)

    Gupta, Tulika; Rajaraman, Gopalan

    2016-07-12

    Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J includes various class of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} class of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed valance systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new generation MNMs.

  15. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    Science.gov (United States)

    Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack

    2016-05-27

    Large-scale modeling of glacier mass balance often relies on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates, using coarse resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a⁻¹ for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the usage of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparably small and of minor importance.
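
    The root-mean-square-error variant of the calibration can be pictured as a simple search over a uniform temperature shift and a precipitation scaling factor; the sketch below does this by grid search around a toy degree-day mass-balance function, with all numbers and the function itself being placeholders rather than the study's model.

```python
import numpy as np
from itertools import product

def calibrate_forcing(mass_balance_model, temp, precip, reference, dT_grid, fP_grid):
    """Find the temperature shift dT and precipitation scaling factor fP that
    minimize the RMSE between modeled and in-situ mass balances.

    mass_balance_model : callable (temp, precip) -> modeled balances, same shape as reference
    temp, precip       : RCM forcing series at the reference sites
    reference          : in-situ mass-balance measurements
    """
    best = (None, None, np.inf)
    for dT, fP in product(dT_grid, fP_grid):
        modeled = mass_balance_model(temp + dT, precip * fP)
        rmse = np.sqrt(np.mean((modeled - reference) ** 2))
        if rmse < best[2]:
            best = (dT, fP, rmse)
    return best   # (dT, fP, rmse)

# Hypothetical toy mass-balance model: accumulation minus a degree-day melt term.
toy_mb = lambda t, p: p - 8.0 * np.maximum(t, 0.0)
t_rcm = np.array([-4.0, -1.0, 2.0, 5.0])
p_rcm = np.array([30.0, 25.0, 20.0, 10.0])
obs   = np.array([35.0, 30.0, 5.0, -45.0])
print(calibrate_forcing(toy_mb, t_rcm, p_rcm, obs,
                        dT_grid=np.arange(0.0, 3.1, 0.5),
                        fP_grid=np.arange(1.0, 2.05, 0.1)))
```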

  16. Systematic parameter inference in stochastic mesoscopic modeling

    CERN Document Server

    Lei, Huan; Li, Zhen; Karniadakis, George

    2016-01-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are sparse. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples especially for systems with high dimensional parametric space....

  17. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used to model the number of Korean tourists to Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP and its parameters were approximated using the Kalman filter algorithm. The results showed that all of the predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
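
    A minimal version of a TVP regression can be written as coefficients that follow random walks and are tracked with the Kalman filter; the sketch below uses hypothetical noise variances and simulated data rather than the study's series, and is meant only to make the estimation idea concrete.

```python
import numpy as np

def tvp_kalman_filter(y, X, q=1e-3, r=1.0):
    """Time-varying-parameter regression y_t = x_t' beta_t + e_t, with
    beta_t = beta_{t-1} + w_t, estimated by the Kalman filter.

    y : observations, shape (T,);  X : regressors, shape (T, k)
    q : random-walk (state) noise variance;  r : observation noise variance
    """
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 1e3                 # diffuse prior on the initial coefficients
    Q = np.eye(k) * q
    betas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P = P + Q                                   # predict
        S = x @ P @ x + r                           # innovation variance
        K = P @ x / S                               # Kalman gain
        beta = beta + K * (y[t] - x @ beta)         # update the coefficients
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# Hypothetical data with one slowly drifting coefficient.
rng = np.random.default_rng(0)
T = 120
X = np.column_stack([np.ones(T), rng.normal(size=T)])
true_beta = np.column_stack([np.full(T, 2.0), 0.5 + 0.01 * np.arange(T)])
y = (X * true_beta).sum(axis=1) + rng.normal(scale=0.3, size=T)
print(tvp_kalman_filter(y, X)[-1])   # final estimates, roughly [2.0, 1.7]
```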

  18. Genetic Parameters of Pre-adjusted Body Weight Growth and Ultrasound Measures of Body Tissue Development in Three Seedstock Pig Breed Populations in Korea

    Directory of Open Access Journals (Sweden)

    Yun Ho Choy

    2015-12-01

    Full Text Available The objective of this study was to compare the effects of body weight growth adjustment methods on genetic parameters of body growth and tissue traits among three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude-mode (A-mode) ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90) were obtained through an adjustment of the age based on the body weight at the test day. Ultrasound measures were also pre-adjusted (ABF, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and ultrasound traits, whereas models II and III included DAYS90 and pre-adjusted ultrasound traits. Fixed factors were sex (sex) and contemporary group (herd-year-month of birth) for all traits among the models. Additionally, models I and II considered a linear covariate of final weight on the ultrasound measure traits. Heritability (h2) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h2 estimates of DAYS90 from models II and III were also somewhat similar. The h2 for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Our heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (−0.29 to −0.38) and between DAYS90 and EMA (−0.16 to −0.26). BF had strong rG with RCP (−0.87 to −0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds. For DAYS90 from models II and III, its correlations with ABF, AEMA, and ARCP were mostly low or negligible except the

  19. Parameter estimation, model reduction and quantum filtering

    Science.gov (United States)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving

  20. Sensitivity assessment, adjustment, and comparison of mathematical models describing the migration of pesticides in soil using lysimetric data

    Science.gov (United States)

    Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.

    2009-07-01

    The water block of physically based models of different levels (the chromatographic PEARL model and the dual-porosity MACRO model) was parameterized using laboratory experimental data and tested against the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters at the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. Without adjustment of the water block, the models underestimated the lysimeter flow and overestimated the soil water content. The theoretical necessity of the model adjustment is explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). Adjusting the models by selecting the most sensitive hydrophysical soil parameters (the approximation parameters of the soil water retention curve, SWRC) gave good agreement between the predicted moisture profiles and their actual values. In contrast to the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirms the need for physically based models that account for preferential flow in the pore space in the prediction, analysis, optimization, and management of modern agricultural technologies.

  1. Processing Approach of Non-linear Adjustment Models in the Space of Non-linear Models

    Institute of Scientific and Technical Information of China (English)

    LI Chaokui; ZHU Qing; SONG Chengfang

    2003-01-01

    This paper investigates the mathematical features of non-linear models and discusses how to process the non-linear factors that contribute to the non-linearity of a non-linear model. On the basis of the error definition, the paper puts forward a new adjustment criterion, SGPE. Finally, it investigates the solution of a non-linear regression model in the non-linear model space and compares the estimated values in the non-linear model space with those in the linear model space.

  2. Delayed heart rate recovery after exercise as a risk factor of incident type 2 diabetes mellitus after adjusting for glycometabolic parameters in men.

    Science.gov (United States)

    Yu, Tae Yang; Jee, Jae Hwan; Bae, Ji Cheol; Hong, Won-Jung; Jin, Sang-Man; Kim, Jae Hyeon; Lee, Moon-Kyu

    2016-10-15

    Some studies have reported that delayed heart rate recovery (HRR) after exercise is associated with incident type 2 diabetes mellitus (T2DM). This study aimed to investigate the longitudinal association of delayed HRR following a graded exercise treadmill test (GTX) with the development of T2DM, including glucose-associated parameters as adjusting factors, in healthy Korean men. Analyses including fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c as confounding factors, together with known confounders, were performed. HRR was calculated as peak heart rate minus heart rate after a 1-min rest (HRR 1). A Cox proportional hazards model was used to quantify the independent association between HRR and incident T2DM. During 9082 person-years of follow-up between 2006 and 2012, there were 180 (10.1%) incident cases of T2DM. After adjustment for age, BMI, systolic BP, diastolic BP, smoking status, peak heart rate, peak oxygen uptake, TG, LDL-C, HDL-C, fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, the hazard ratios (HRs) [95% confidence interval (CI)] of incident T2DM comparing the second and third tertiles to the first tertile of HRR 1 were 0.867 (0.609-1.235) and 0.624 (0.426-0.915), respectively (p for trend = 0.017). As a continuous variable, in the fully adjusted model, the HR (95% CI) of incident T2DM associated with each 1-beat increase in HRR 1 was 0.980 (0.960-1.000) (p = 0.048). This study demonstrated that delayed HRR after exercise predicts incident T2DM in men, even after adjusting for fasting glucose, HOMA-IR, HOMA-β, and HbA1c. However, only HRR 1 had clinical significance.
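
    A minimal sketch of this kind of Cox proportional-hazards fit (continuous HRR 1 plus glycometabolic covariates), using the lifelines package on a made-up data frame; the column names and distributions are placeholders, not the study data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "followup_years": rng.uniform(1, 6, n),       # person-time at risk
    "incident_t2dm": rng.binomial(1, 0.1, n),      # event indicator
    "hrr_1min": rng.normal(30, 8, n),               # heart rate recovery at 1 min
    "age": rng.normal(45, 8, n),
    "bmi": rng.normal(24, 3, n),
    "fasting_glucose": rng.normal(95, 10, n),
    "hba1c": rng.normal(5.5, 0.4, n),
})

# All columns other than duration and event are treated as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="incident_t2dm")
# Hazard ratio per 1-beat increase in HRR 1, adjusted for the listed covariates.
print(np.exp(cph.params_["hrr_1min"]))
```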

  3. Adjusting kinematics and kinetics in a feedback-controlled toe walking model

    Directory of Open Access Journals (Sweden)

    Olenšek Andrej

    2012-08-01

    Full Text Available Abstract Background In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. The ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutical intervention - inhibitory casting. Results By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. We further acknowledge that they to some extent follow similar improvement tendencies as those which one can

  4. Multi-scale calculation of settling speed of coarse particles by accelerated Stokesian dynamics without adjustable parameter

    Institute of Scientific and Technical Information of China (English)

    Long Wang; Jiachun Li; Jifu Zhou

    2009-01-01

    The calculation of the settling speed of coarse particles is first addressed using accelerated Stokesian dynamics without adjustable parameters, in which the far-field force acting on a particle, rather than the particle velocity, is chosen as the dependent variable in order to account for inter-particle hydrodynamic interactions. The sedimentation of a simple cubic array of spherical particles is simulated and compared with available results to verify and validate the numerical code and computational scheme. The improved method keeps the same computational cost of order O(N log N) as the usual accelerated Stokesian dynamics. Then, more realistic random suspension sedimentation is investigated with the help of the Monte Carlo method. The computational results agree well with experimental fits. Finally, the sedimentation of finer cohesive particles, which is often observed in estuary environments, is presented as a further application in coastal engineering.

  5. Parameter optimization in S-system models

    Directory of Open Access Journals (Sweden)

    Vasconcelos Ana

    2008-04-01

    Full Text Available Abstract Background The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternate regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well.

  6. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

    The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on a MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to a multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra and to calculate energy deposition and doses. Some of the calculation results are presented in the paper.

  7. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    OpenAIRE

    Baker Syed; Poskar C; Junker Björn

    2011-01-01

    Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
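
    A small sketch of the joint state-and-parameter estimation idea via state augmentation with an unscented Kalman filter, using the filterpy package; the one-compartment decay model is a stand-in for a kinetic model, not the sucrose-accumulation model analyzed in the paper:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1

def fx(x, dt):
    """Augmented state [concentration c, rate k]: dc/dt = -k*c, k treated as constant."""
    c, k = x
    return np.array([c * np.exp(-k * dt), k])

def hx(x):
    return np.array([x[0]])           # only the concentration is measured

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([1.0, 0.2])          # initial guesses for state and parameter
ukf.P = np.diag([0.1, 0.5])
ukf.Q = np.diag([1e-5, 1e-6])         # tiny process noise keeps the parameter adaptable
ukf.R = np.array([[1e-3]])

rng = np.random.default_rng(4)
true_k, c = 0.7, 1.0
for _ in range(200):
    c *= np.exp(-true_k * dt)          # "true" system generating noisy measurements
    z = c + rng.normal(scale=0.03)
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated rate constant:", round(ukf.x[1], 3))
```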

  8. Moose models with vanishing $S$ parameter

    CERN Document Server

    Casalbuoni, R; Dominici, Daniele

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the $S$ parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on $K$ SU(2) gauge groups, $K+1$ chiral fields, and electroweak groups $SU(2)_L$ and $U(1)_Y$ at the ends of the chain of the moose. $S$ vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical non-local field connecting the two ends of the moose. The model then acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of $S$ through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.

  9. Model parameters for simulation of physiological lipids

    Science.gov (United States)

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  10. The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs

    Institute of Scientific and Technical Information of China (English)

    RAO Lan-lan; CAI Dong-han

    2004-01-01

    We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing returns to increasing returns. It is also shown that the economy improves when the coefficients of the adjustment costs become small.

  11. TruMicro Series 2000 sub-400 fs class industrial fiber lasers: adjustment of laser parameters to process requirements

    Science.gov (United States)

    Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2017-02-01

    The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment.1 Within the next years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: These are non-contact wear-free processing, higher process accuracy or an increase of processing speed and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real time process-inherent adjustments of the parameters with industry-ready reliability. This industry-ready reliability is ensured by a long experience in designing and building ultrashort-pulse lasers in combination with rigorous optimization of the mechanical construction, optical components and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high repetition rate mode-locked fiber oscillator also enabling flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization

  12. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables depending on kinematic variables is presented. The capabilities for visualizing large numbers of experimental data sets and the models that describe them are shown by examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and the codes of the adjusted parametric models, with the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  13. AN OPTIMAL METHOD FOR ADJUSTING THE CENTERING PARAMETER IN THE WIDE-NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM FOR LINEAR PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    Wen-bao Ai

    2004-01-01

    In this paper we present a dynamic optimal method for adjusting the centering parameter in the wide-neighborhood primal-dual interior-point algorithms for linear programming, while the centering parameter is generally a constant in the classical wide-neighborhood primal-dual interior-point algorithms. The computational results show that the new method is more efficient.

  14. Uncertainty Quantification for Optical Model Parameters

    CERN Document Server

    Lovell, A E; Sarich, J; Wild, S M

    2016-01-01

    Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...

  15. Numerical modeling of partial discharges parameters

    Directory of Open Access Journals (Sweden)

    Kartalović Nenad M.

    2016-01-01

    Full Text Available In recent years, the testing of partial discharges and its use for diagnosing the insulation condition of high-voltage generators, transformers, cables, and other high-voltage equipment have developed rapidly. This is a result of the development of electronics as well as of growing knowledge about the processes of partial discharges. The aim of this paper is to contribute to a better understanding of the phenomenon of partial discharges by considering the relevant physical processes in insulation materials and insulation systems. Specific pre-breakdown processes, the development of processes at the local level, and their impact on a specific insulation material are considered. This approach to the phenomenon of partial discharges makes it possible to better take into account the relevant discharge parameters and to build a better numerical model of partial discharges.

  16. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail and the adjustment method for observations of the marine survey net is studied, in which the rank-defect characteristic is discovered for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example of calculations with really observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method used at present in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  17. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.

  18. Microwave and infrared spectra, adjusted r0 structural parameters, conformational stabilities, vibrational assignments, and theoretical calculations of cyclobutylcarboxylic acid chloride.

    Science.gov (United States)

    Klaassen, Joshua J; Darkhalil, Ikhlas D; Deodhar, Bhushan S; Gounev, Todor K; Gurusinghe, Ranil M; Tubergen, Michael J; Groner, Peter; Durig, James R

    2013-08-01

    The FT-microwave spectrum of cyclobutylcarboxylic acid chloride, c-C4H7C(O)Cl, has been recorded and 153 transitions for the (35)Cl and (37)Cl isotopologues have been assigned for the gauche-equatorial (g-Eq) conformation. The ground state rotational constants were determined for (35)Cl [(37)Cl]: A = 4349.8429(25) [4322.0555(56)] MHz, B = 1414.8032(25) [1384.5058(25)] MHz, and C = 1148.2411(25) [1126.3546(25)] MHz. From these rotational constants and ab initio predicted parameters, adjusted r0 parameters are reported with distances (Å) rCα-C = 1.491(4), rC═O = 1.193(3), rCα-Cβ = 1.553(4), rCα-Cβ' = 1.540(4), rCγ-Cβ = 1.547(4), rCγ-Cβ' = 1.546(4), rC-Cl = 1.801(3) and angles (deg) τCγCβCβ'Cα = 30.9(5). Variable temperature (-70 to -100 °C) infrared spectra (4000 to 400 cm(-1)) were recorded in liquid xenon and the g-Eq conformer was determined the most stable form, with enthalpy differences of 91 ± 9 cm(-1) (1.09 ± 0.11 kJ/mol) for the gauche-axial (g-Ax) form and 173 ± 17 cm(-1) (2.07 ± 0.20 kJ/mol) for the trans-equatorial (t-Eq) conformer. The relative amounts at ambient temperature are 54% g-Eq, 35 ± 1% g-Ax, and 12 ± 1% t-Eq forms. Vibrational assignments have been provided for the three conformers and theoretical calculations were carried out. The results are discussed and compared to corresponding properties of related molecules.

  19. Combined Parameter and State Estimation Algorithms for Multivariable Nonlinear Systems Using MIMO Wiener Models

    Directory of Open Access Journals (Sweden)

    Houda Salhi

    2016-01-01

    Full Text Available This paper deals with the parameter estimation problem for multivariable nonlinear systems described by MIMO state-space Wiener models. Recursive parameter and state estimation algorithms are presented using the least squares technique, the adjustable model, and Kalman filter theory. The basic idea is to jointly estimate the parameters, the state vector, and the internal variables of MIMO Wiener models based on a specific decomposition technique that extracts the internal vector and avoids problems related to the invertibility assumption. The effectiveness of the proposed algorithms is shown by an illustrative simulation example.
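
    A generic recursive least-squares update of the kind such joint estimation schemes build on, in plain numpy; the ARX-style regressor and scalar output are placeholders for the decomposed Wiener-model structure described above:

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS with forgetting factor: theta <- theta + K*(y - phi'theta)."""
    def __init__(self, n_params, lam=0.99):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Toy use: identify y_t = a*y_{t-1} + b*u_{t-1} from streaming data.
rng = np.random.default_rng(5)
a_true, b_true = 0.8, 1.5
rls = RecursiveLeastSquares(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(500):
    u = rng.normal()
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    rls.update(np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print(rls.theta)        # approaches [0.8, 1.5]
```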

  20. Gait Parameter Adjustments for Walking on a Treadmill at Preferred, Slower, and Faster Speeds in Older Adults with Down Syndrome

    Directory of Open Access Journals (Sweden)

    Beth A. Smith

    2012-01-01

    Full Text Available The combined effects of ligamentous laxity, hypotonia, and decrements associated with aging lead to stability-enhancing foot placement adaptations during routine overground walking at a younger age in adults with Down syndrome (DS compared to their peers with typical development (TD. Our purpose here was to examine real-time adaptations in older adults with DS by testing their responses to walking on a treadmill at their preferred speed and at speeds slower and faster than preferred. We found that older adults with DS were able to adapt their gait to slower and faster than preferred treadmill speeds; however, they maintained their stability-enhancing foot placements at all speeds compared to their peers with TD. All adults adapted their gait patterns similarly in response to faster and slower than preferred treadmill-walking speeds. They increased stride frequency and stride length, maintained step width, and decreased percent stance as treadmill speed increased. Older adults with DS, however, adjusted their stride frequencies significantly less than their peers with TD. Our results show that older adults with DS have the capacity to adapt their gait parameters in response to different walking speeds while also supporting the need for intervention to increase gait stability.

  1. ASPECTS OF DESIGN PROCESS AND CAD MODELLING OF AN ADJUSTABLE CENTRIFUGAL COUPLING

    Directory of Open Access Journals (Sweden)

    Adrian BUDALĂ

    2015-05-01

    Full Text Available The paper deals with the constructive and functional elements of an adjustable coupling with friction shoes and adjustable driving. The paper also shows a few stages of the design process, some advantages of using CAD software, and some comparative results for the prototype vs. the CAD model.

  2. Parental Support, Coping Strategies, and Psychological Adjustment: An Integrative Model with Late Adolescents.

    Science.gov (United States)

    Holahan, Charles J.; And Others

    1995-01-01

    An integrative predictive model was applied to responses of 241 college freshmen to examine interrelationships among parental support, adaptive coping strategies, and psychological adjustment. Social support from both parents and a nonconflictual parental relationship were positively associated with adolescents' psychological adjustment. (SLD)

  3. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method...... that will optimise parameters based on the behaviour of the elastic models over time....

  4. Model Identification of Linear Parameter Varying Aircraft Systems

    OpenAIRE

    Fujimore, Atsushi; Ljung, Lennart

    2007-01-01

    This article presents a parameter estimation of continuous-time polytopic models for a linear parameter varying (LPV) system. The prediction error method of linear time invariant (LTI) models is modified for polytopic models. The modified prediction error method is applied to an LPV aircraft system whose varying parameter is the flight velocity and model parameters are the stability and control derivatives (SCDs). In an identification simulation, the polytopic model is more suitable for expre...

  5. Modeling of an Adjustable Beam Solid State Light Project

    Science.gov (United States)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  6. The relationship of values to adjustment in illness: a model for nursing practice.

    Science.gov (United States)

    Harvey, R M

    1992-04-01

    This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values as well as the need for value change are described in the literature related to adjustment to physical disability and chronic illness. An empirical model, however, that explains the relationship of values to adjustment or adaptation has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.

  7. Mixed continuous/discrete time modelling with exact time adjustments

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; van de Burgwal, M.D.; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria

    2011-01-01

    Many systems interact with their physical environment. The design of such systems needs a modelling and simulation tool which can deal with both continuous and discrete aspects. However, most current tools are not adequately able to do so, as they implement both continuous and discrete time signals

  8. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, by using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.

  9. [Calculation of parameters in forest evapotranspiration model].

    Science.gov (United States)

    Wang, Anzhi; Pei, Tiefan

    2003-12-01

    Forest evapotranspiration is an important component not only of the water balance but also of the energy balance. Accurate simulation of forest evapotranspiration is in great demand for the development of forest hydrology and forest meteorology, and it is also a theoretical basis for the management and utilization of water resources and forest ecosystems. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanistic model for estimating forest evapotranspiration, based on aerodynamic principles and the energy balance equation. Using the data measured by the Routine Meteorological Measurement System and the Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum phi m, and stability function for heat phi h were determined. The displacement height of the study site was 17.8 m, close to the mean canopy height, and the functions of phi m and phi h changing with the gradient Richardson number Ri were constructed.

  10. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
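
    An entirely hypothetical rendering of the sensitivity-gated map update described in the claim: two look-up-table cells are nudged toward a performance target only along parameters whose current sensitivity exceeds a threshold. Map names, gains, and thresholds are invented for illustration:

```python
import numpy as np

# Hypothetical 2-D engine maps indexed by (speed bin, load bin).
spark_advance = np.full((8, 8), 20.0)     # engine control parameter 1
egr_fraction = np.full((8, 8), 0.10)      # engine control parameter 2

def adjust_maps(i, j, performance, target, sens1, sens2,
                threshold=0.05, gain=0.1):
    """Nudge the map cell(s) whose parameter sensitivity exceeds the threshold."""
    error = target - performance
    if abs(sens1) > threshold:
        spark_advance[i, j] += gain * error / sens1
    if abs(sens2) > threshold:
        egr_fraction[i, j] += gain * error / sens2

# Example: at operating point (3, 5), measured torque is below target and the
# torque is currently far more sensitive to spark advance than to EGR, so only
# the spark-advance map is adjusted.
adjust_maps(i=3, j=5, performance=180.0, target=190.0, sens1=2.0, sens2=0.01)
print(spark_advance[3, 5], egr_fraction[3, 5])
```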

  11. Adjustment problems and maladaptive relational style: a mediational model of sexual coercion in intimate relationships.

    Science.gov (United States)

    Salwen, Jessica K; O'Leary, K Daniel

    2013-07-01

    Four hundred and fifty-three married or cohabiting couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.

  12. Prediction of Liquid-Liquid Equilibrium Using the Group Solubility Parameter Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Mo; CHEN Fuming

    2005-01-01

    The group solubility parameter (GSP) model was used to analyze the liquid-liquid equilibrium (LLE) of ternary and quaternary systems. The GSP parameters are divided into four dimensions representing the four major intermolecular forces. The values of the parameters were determined by regression using the nonlinear SIMPLEX optimization method to fit the LLE data of 548 ternary and 26 quaternary systems selected from the literature. LLE predictions for 8 ternary systems were then made using the fitted parameters. Comparison of the results with predictions using the modified UNIFAC model shows that the GSP model needs fewer adjustable parameters to achieve similar accuracy and that the parameter values are easily acquired by analysis of available data.

  13. Transfer function modeling of damping mechanisms in distributed parameter models

    Science.gov (United States)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  14. Parameter Estimation of a Plucked String Synthesis Model Using a Genetic Algorithm with Perceptual Fitness Calculation

    Directory of Open Access Journals (Sweden)

    Riionheimo Janne

    2003-01-01

    Full Text Available We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been intensively used for sound synthesis of various string instruments but the fine tuning of the parameters has been carried out with a semiautomatic method that requires some hand adjustment with human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
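
    A bare-bones genetic algorithm of the sort described, with a placeholder fitness function standing in for the perceptually weighted comparison of synthesized and recorded tones; the parameter bounds and GA settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
bounds = np.array([[0.90, 0.9999],    # loop-filter gain (assumed parameter)
                   [0.0, 1.0],        # loop-filter coefficient
                   [0.0, 1.0]])       # excitation brightness

target = np.array([0.995, 0.3, 0.7])  # stands in for the analyzed recorded tone

def fitness(individual):
    # Placeholder for the perceptual error between synthesized and recorded tones;
    # here simply the negative distance to a known target parameter set.
    return -np.linalg.norm(individual - target)

def genetic_search(pop_size=60, n_gen=100, mut_sigma=0.02, elite=2):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]
        children = []
        while len(children) < pop_size - elite:
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random(len(bounds))
            child = alpha * p1 + (1 - alpha) * p2                    # blend crossover
            child += rng.normal(scale=mut_sigma, size=len(bounds))   # mutation
            children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([pop[order[:elite]], children])              # elitism
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(genetic_search())
```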

  15. On the modeling of internal parameters in hyperelastic biological materials

    CERN Document Server

    Giantesio, Giulia

    2016-01-01

    This paper concerns the behavior of hyperelastic energies depending on an internal parameter. First, the situation in which the internal parameter is a function of the gradient of the deformation is presented. Second, two models where the parameter describes the activation of skeletal muscle tissue are analyzed. In those models, the activation parameter depends on the strain and it is important to consider the derivative of the parameter with respect to the strain in order to capture the proper behavior of the stress.

  16. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters...
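
    A small numpy sketch of the two diagnostics compared in such studies: parameter correlation coefficients from the least-squares covariance matrix and the singular value decomposition of the sensitivity (Jacobian) matrix; the Jacobian below is made up, with two nearly redundant parameters:

```python
import numpy as np

# Made-up sensitivity (Jacobian) matrix: d(simulated head)/d(parameter),
# rows = observations, columns = parameters (e.g. two conductivities and recharge).
# Columns 0 and 1 are nearly proportional, so those parameters are hard to separate.
J = np.array([[1.0, 2.01, 0.5],
              [2.0, 3.98, 1.0],
              [1.5, 3.02, 0.2],
              [0.5, 1.01, 0.9]])

# (1) Parameter correlation coefficients from (J'J)^-1.
cov = np.linalg.inv(J.T @ J)
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)
print("correlation(p0, p1) =", round(corr[0, 1], 3))   # magnitude near 1 flags extreme correlation

# (2) Singular value decomposition of J: a near-zero singular value and the
# corresponding right singular vector identify the poorly estimable combination.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", np.round(s, 4))
print("near-null-space direction:", np.round(Vt[-1], 3))
```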

  17. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    Science.gov (United States)

    2012-11-01

    Rankine Vortex (RV) model [25], the SLOSH model [28], the Holland model [29], the vortex simulation model [30], and the Willoughby and Rahn model [31] ... where Pn = Pc − 0.69 + 1.33Vm + 0.11u (3). Willoughby et al. [34] provide an alternative formula to estimate Rm as a function of ... MacAfee and Pearson [26], and Willoughby et al. [34] also made adjustments which were tailored for mid-latitude applications. 3 Adjustments to the RV

  18. Model comparisons and genetic and environmental parameter ...

    African Journals Online (AJOL)

    arc

    South African Journal of Animal Science 2005, 35 (1) ... Genetic and environmental parameters were estimated for pre- and post-weaning average daily gain ..... and BWT (and medium maternal genetic correlations) indicates that these traits ...

  19. NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model

    OpenAIRE

    Marković, Darija

    2009-01-01

    In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...

  20. On the compensation between cloud feedback and cloud adjustment in climate models

    Science.gov (United States)

    Chung, Eui-Seok; Soden, Brian J.

    2017-04-01

    Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.

  1. Parameter optimization model in electrical discharge machining process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The electrical discharge machining (EDM) process is, at present, still an experience-based process, wherein the selected parameters are often far from the optimum, and selecting optimized parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish a parameter optimization model. An ANN model which adopts the Levenberg-Marquardt algorithm has been set up to represent the relationship between the material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
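
    A compact stand-in for the ANN-plus-GA workflow: a small scikit-learn MLP is fitted to made-up machining data as a surrogate for MRR, and scipy's differential evolution (a GA-style global optimizer) searches the surrogate for good settings; the parameter ranges and response function are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(7)

# Made-up experimental design: current (A), pulse-on time (us), pulse-off time (us).
X = rng.uniform([5, 10, 10], [25, 200, 200], size=(120, 3))
# Synthetic material removal rate with a broad optimum, standing in for measured MRR.
mrr = (0.02 * X[:, 0] * np.sqrt(X[:, 1]) * np.exp(-X[:, 2] / 150.0)
       + rng.normal(scale=0.05, size=len(X)))

# ANN surrogate of the process (architecture chosen arbitrarily for illustration).
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X, mrr)

# GA-style search for the parameter set that maximizes predicted MRR.
bounds = [(5, 25), (10, 200), (10, 200)]
result = differential_evolution(lambda x: -ann.predict(x.reshape(1, -1))[0],
                                bounds, seed=0, tol=1e-6)
print("suggested parameters:", np.round(result.x, 1),
      "predicted MRR:", round(-result.fun, 3))
```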

  2. Study on the adjustable production function model

    Institute of Scientific and Technical Information of China (English)

    葛新权

    2003-01-01

    The Cobb-Douglas production function is the most frequently used nonlinear model and can be changed into a linear model. There is no doubt about the reasonableness of this logarithmic linearization. On the basis of a deeper analysis, this paper puts forward a new proposition that there is a defect in this linearization; hence an adjustable production function model is proposed to eliminate it.

  3. Sensitivity of a Shallow-Water Model to Parameters

    CERN Document Server

    Kazantsev, Eugene

    2011-01-01

    An adjoint-based technique is applied to a shallow water model in order to estimate the influence of the model's parameters on the solution. Among the parameters, the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress tension are considered. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. possibility of improving the model by controlling the parameter, i.e., whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in the classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...

  4. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel.

    Science.gov (United States)

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-10-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
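
    The discrete LQR gain computation at the core of such a controller, sketched with scipy for a generic two-state modal plant; the plant matrices and weights are illustrative and do not reproduce the antenna-panel model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Generic discretized second-order plant (a lightly damped structural mode),
# standing in for the modal model extracted from the finite element analysis.
dt = 0.01
A = np.array([[1.0, dt],
              [-0.5 * dt, 1.0 - 0.02 * dt]])
B = np.array([[0.0],
              [dt]])
Q = np.diag([100.0, 1.0])      # penalize shape error more than velocity
R = np.array([[0.1]])           # actuator effort penalty

# Discrete-time LQR: solve the Riccati equation, then K = (R + B'PB)^-1 B'PA.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop state feedback u = -K x driving the mode to zero.
x = np.array([[1.0], [0.0]])
for _ in range(300):
    u = -K @ x
    x = A @ x + B @ u
print("final displacement:", float(x[0]))
```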

  5. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    Full Text Available With the aim of developing multiple input and multiple output (MIMO coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.

  6. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers, and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and of the support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  7. Contact Angle Adjustment in Equation of States Based Pseudo-Potential Model

    CERN Document Server

    Hu, Anjie; Uddin, Rizwan

    2015-01-01

    The single-component pseudo-potential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. It has often been claimed that this model can be stable for density ratios larger than 1000; however, its application is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method affects the stability of the model. Moreover, simulation results in the present work show that, when the contact angle adjustment method is applied, the density distribution near the wall is artificially changed and the contact angle depends on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on geometric analysis is proposed and numerically compared with the original method. Simulation results show that, with the new...

  8. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...

  9. Adjustment model of thermoluminescence experimental data; Modelo de ajuste de datos experimentales de termoluminiscencia

    Energy Technology Data Exchange (ETDEWEB)

    Moreno y Moreno, A. [Departamento de Apoyo en Ciencias Aplicadas, Benemerita Universidad Autonoma de Puebla, 4 Sur 104, Centro Historico, 72000 Puebla (Mexico); Moreno B, A. [Facultad de Ciencias Quimicas, UNAM, 04510 Mexico D.F. (Mexico)

    2002-07-01

    This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i · exp(−(1/b_i)(T − c_i)), where a_i, b_i, c_i are the parameters of the i-th peak adjusted to a Gaussian curve. The curve adjustments can be carried out manually or analytically using the macro function and the solver.xla add-in previously installed in the computational system. In this work it is shown: 1. Experimental data for a LiF curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated in macro mode. 2. A LiF curve with four peaks obtained from Harshaw information and simulated in Microsoft Excel, discussed in previous works, as a reference not using the macro. (Author)
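
    The same kind of multi-peak adjustment can be reproduced outside a spreadsheet; below is a sketch with scipy's curve_fit, assuming Gaussian-shaped peaks as stated in the abstract and synthetic four-peak data rather than the UNAM or Harshaw measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def glow_curve(T, *params):
    """Sum of Gaussian peaks: I(T) = sum_i a_i * exp(-((T - c_i) / b_i)**2)."""
    I = np.zeros_like(T)
    for a, b, c in zip(params[0::3], params[1::3], params[2::3]):
        I += a * np.exp(-((T - c) / b) ** 2)
    return I

# Synthetic four-peak glow curve (arbitrary amplitudes, widths, peak temperatures).
T = np.linspace(300, 550, 400)
true = [1.0, 15, 370,  2.5, 18, 420,  1.8, 20, 460,  0.9, 25, 505]
rng = np.random.default_rng(8)
I_obs = glow_curve(T, *true) + rng.normal(scale=0.02, size=T.size)

# Initial guesses for (a_i, b_i, c_i) of each peak, then least-squares adjustment.
p0 = [1, 20, 360,  2, 20, 425,  2, 20, 465,  1, 20, 500]
popt, _ = curve_fit(glow_curve, T, I_obs, p0=p0)
print(np.round(popt, 1))
```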

  10. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    Science.gov (United States)

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  11. Multi-objective global sensitivity analysis of the WRF model parameters

    Science.gov (United States)

    Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen

    2015-04-01

    Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as the Weather Research and Forecasting (WRF) model. However, this is a very complicated process, as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model to different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing longwave radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters that affect the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities to the 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs, but three to four of the parameters are sensitive to all model outputs considered. The sensitivity results from this research can be the basis for future model parameter optimization of the WRF model.

  12. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    Science.gov (United States)

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median −11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water. Copyright © 2011.

  13. Compositional modelling of distributed-parameter systems

    NARCIS (Netherlands)

    Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.

    2005-01-01

    The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where a historical account is also provided.) The identification of the

  14. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  15. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
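
    A toy contrast between the two estimation strategies, using a hypothetical one-parameter risk function p(DCS) = 1 − exp(−λ·dose) and made-up dive outcomes; real probabilistic decompression sickness models have far more structure. The maximum likelihood fit returns a single point estimate, while the grid posterior (flat prior assumed) yields a credible interval.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Hypothetical binary outcomes: 1 = DCS, 0 = no DCS, with an exposure "dose" per dive
        dose = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5])
        dcs  = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1  ])

        def neg_log_lik(lam):
            p = np.clip(1.0 - np.exp(-lam * dose), 1e-12, 1 - 1e-12)
            return -np.sum(dcs * np.log(p) + (1 - dcs) * np.log(1 - p))

        # Maximum likelihood: a single point estimate of lambda
        mle = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded").x

        # Bayesian: posterior over lambda on a grid (flat prior), then a 95% credible interval
        grid = np.linspace(1e-6, 10.0, 5000)
        dg = grid[1] - grid[0]
        log_post = np.array([-neg_log_lik(g) for g in grid])
        post = np.exp(log_post - log_post.max())
        post /= post.sum() * dg
        cdf = np.cumsum(post) * dg
        ci = (grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)])
        print(f"MLE lambda = {mle:.2f}, 95% credible interval = [{ci[0]:.2f}, {ci[1]:.2f}]")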

  16. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research

    Directory of Open Access Journals (Sweden)

    Miguel Angel Luque-Fernandez

    2016-10-01

    Full Text Available Abstract Background In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. Methods We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. Results All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3 for non-flexible piecewise exponential models). Conclusion We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
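
    A small sketch of this kind of overdispersion check with statsmodels on simulated counts and person-time offsets; it shows only a Pearson-based dispersion estimate and a quasi-Poisson rescaling of the standard errors, not the paper's regression-based score test or the relative-survival setup.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        age_group = rng.integers(0, 3, n)                     # hypothetical covariate
        person_time = rng.uniform(0.5, 5.0, n)                # follow-up time per record
        mu = person_time * np.exp(-2.0 + 0.4 * age_group)
        deaths = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

        X = sm.add_constant(age_group.astype(float))
        pois = sm.GLM(deaths, X, family=sm.families.Poisson(),
                      offset=np.log(person_time)).fit()

        # Pearson-based dispersion: values well above 1 suggest overdispersion
        dispersion = pois.pearson_chi2 / pois.df_resid
        print(f"estimated dispersion = {dispersion:.2f}")

        # Quasi-Poisson style correction: rescale the covariance by the Pearson dispersion
        quasi = sm.GLM(deaths, X, family=sm.families.Poisson(),
                       offset=np.log(person_time)).fit(scale="X2")
        print(quasi.bse)   # standard errors inflated by sqrt(dispersion)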

  17. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system on a reference ellipsoid and on a mapping plane. For practical reasons, it often takes a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length on the ellipsoid, but the simple form of converted pseudo-observations are the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector, before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters is presented. Assuming that the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observation, is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients. While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  18. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    Science.gov (United States)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system on a reference ellipsoid and on a mapping plane. For practical reasons, it often takes a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simple form of converted pseudo-observations are the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector, before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters is presented. Assuming that the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observation, is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS
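
    A minimal numerical sketch of the preferred functional model described above: each GNSS vector component is written directly as a difference of geocentric Cartesian coordinates computed from the geodetic coordinates of its endpoints (GRS80 ellipsoid assumed here); linearization, weighting and the full least-squares adjustment are omitted, and the station coordinates and vector components are made up.

        import numpy as np

        A_GRS80 = 6378137.0
        E2_GRS80 = 0.00669438002290     # first eccentricity squared

        def geodetic_to_ecef(lat_deg, lon_deg, h):
            """Geodetic (B, L, h) on GRS80 -> geocentric Cartesian (X, Y, Z)."""
            B, L = np.radians(lat_deg), np.radians(lon_deg)
            N = A_GRS80 / np.sqrt(1.0 - E2_GRS80 * np.sin(B) ** 2)
            X = (N + h) * np.cos(B) * np.cos(L)
            Y = (N + h) * np.cos(B) * np.sin(L)
            Z = (N * (1.0 - E2_GRS80) + h) * np.sin(B)
            return np.array([X, Y, Z])

        def gnss_vector_residual(obs_dxyz, geodetic_i, geodetic_k):
            """Observation-equation residual for one GNSS vector i -> k, written
            directly as a function of the (approximate) geodetic coordinates."""
            computed = geodetic_to_ecef(*geodetic_k) - geodetic_to_ecef(*geodetic_i)
            return obs_dxyz - computed

        # Hypothetical vector between two stations (coordinates and dX, dY, dZ made up)
        v = gnss_vector_residual(np.array([-11837.2, 64012.8, 55983.1]),
                                 (50.05, 21.99, 220.0), (50.60, 22.70, 250.0))
        print(v)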

  19. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    Science.gov (United States)

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model) examining the effects resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  20. A Model of Divorce Adjustment for Use in Family Service Agencies.

    Science.gov (United States)

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  1. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    Science.gov (United States)

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  2. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...

  3. Nonlinear adaptive synchronization rule for identification of a large amount of parameters in dynamical models

    Energy Technology Data Exchange (ETDEWEB)

    Ma Huanfei [Center for Computational Systems Biology, Fudan University, Shanghai 200433 (China)] [School of Computer Science, Fudan University, Shanghai 200433 (China); Lin Wei, E-mail: wlin@fudan.edu.c [Center for Computational Systems Biology, Fudan University, Shanghai 200433 (China)] [School of Mathematical Sciences, Fudan University, Shanghai 200433 (China)] [Key Laboratory of Mathematics for Nonlinear Sciences (Fudan University), Ministry of Education (China)] [CAS-MPG Partner Institute for Computational Biology, Chinese Academy of Sciences, Shanghai 200031 (China)

    2009-12-28

    The existing adaptive synchronization technique based on the stability theory and invariance principle of dynamical systems, though theoretically proved valid for parameter identification in specific models, often shows a slow convergence rate and even fails in practice when the number of parameters becomes large. Here, a novel nonlinear adaptive rule for the parameter update is proposed to accelerate convergence. Its feasibility is validated by analytical arguments as well as by specific parameter identification in the Lotka-Volterra model with multiple species. Two adjustable factors in this rule influence the identification accuracy, so a proper choice of these factors leads to an optimal performance of the rule. In addition, a feasible method for avoiding the occurrence of approximate linear dependence among parameter terms on the synchronized manifold is also proposed.
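
    The flavour of such an adaptive update rule can be seen in a one-parameter toy problem, which is not the Lotka-Volterra system or the accelerated nonlinear rule of the paper: a response system is driven toward the observed trajectory while the parameter estimate evolves in proportion to the synchronization error.

        import numpy as np
        from scipy.integrate import solve_ivp

        A_TRUE, K, GAMMA = 2.0, 5.0, 4.0   # true parameter, coupling gain, adaptation gain

        def coupled(t, s):
            x, y, a_hat = s
            dx = A_TRUE - x                    # "observed" drive system
            dy = a_hat - y + K * (x - y)       # response system coupled to the drive
            da = GAMMA * (x - y)               # adaptive update rule for the parameter
            return [dx, dy, da]

        sol = solve_ivp(coupled, (0.0, 20.0), [0.5, 0.0, 0.0])
        print(f"a_hat(20) = {sol.y[2, -1]:.3f} (true value {A_TRUE})")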

  4. Parameter redundancy in discrete state‐space and integrated models

    Science.gov (United States)

    McCrea, Rachel S.

    2016-01-01

    Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826

  5. Parameter redundancy in discrete state-space and integrated models.

    Science.gov (United States)

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
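
    The symbolic rank test that underlies this style of redundancy detection can be sketched with SymPy: differentiate an exhaustive summary with respect to the parameters and compare the rank of the Jacobian with the number of parameters. The tiny summary below, in which one survival probability and the detection probability appear only as a product, is illustrative and not taken from the paper.

        import sympy as sp

        phi1, phi2, p = sp.symbols("phi1 phi2 p", positive=True)
        params = [phi1, phi2, p]

        # Toy exhaustive summary: two expected quantities from a capture-recapture-style model
        kappa = sp.Matrix([phi1 * p, phi1 * phi2 * p])

        D = kappa.jacobian(params)
        rank = D.rank()
        print(f"rank = {rank}, number of parameters = {len(params)}")
        if rank < len(params):
            print("model is parameter redundant (non-identifiable)")
        else:
            print("all parameters are (locally) estimable from this summary")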

  6. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    Science.gov (United States)

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.

  7. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Science.gov (United States)

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents is stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes.

  8. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  9. Ternary interaction parameters in calphad solution models

    Energy Technology Data Exchange (ETDEWEB)

    Eleno, Luiz T.F., E-mail: luizeleno@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica; Schön, Claudio G., E-mail: schoen@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Computational Materials Science Laboratory. Department of Metallurgical and Materials Engineering

    2014-07-01

    For random, diluted, multicomponent solutions, the excess chemical potentials can be expanded in power series of the composition, with coefficients that are pressure- and temperature-dependent. For a binary system, this approach is equivalent to using polynomial truncated expansions, such as the Redlich-Kister series for describing integral thermodynamic quantities. For ternary systems, an equivalent expansion of the excess chemical potentials clearly justifies the inclusion of ternary interaction parameters, which arise naturally in the form of correction terms in higher-order power expansions. To demonstrate this, we carry out truncated polynomial expansions of the excess chemical potential up to the sixth power of the composition variables. (author)
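
    A small sketch of how the binary Redlich-Kister sums and a single ternary correction term combine in a molar excess quantity; the interaction parameters below are hypothetical constants, whereas in a real Calphad assessment they are temperature- and pressure-dependent.

        from itertools import combinations

        def excess_gibbs(x, L_binary, L_ternary=0.0):
            """Excess Gibbs energy (J/mol) for a ternary solution: binary
            Redlich-Kister sums plus a single ternary interaction term."""
            g = 0.0
            for (i, j) in combinations(range(3), 2):
                for k, L in enumerate(L_binary[(i, j)]):
                    g += x[i] * x[j] * L * (x[i] - x[j]) ** k
            g += x[0] * x[1] * x[2] * L_ternary    # ternary correction term
            return g

        # Hypothetical interaction parameters (J/mol)
        L_binary = {(0, 1): [-12000.0, 3000.0],
                    (0, 2): [8000.0],
                    (1, 2): [-5000.0, 1500.0, -400.0]}
        print(excess_gibbs([0.2, 0.3, 0.5], L_binary, L_ternary=2500.0))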

  10. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  11. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  12. Performance and Probabilistic Verification of Regional Parameter Estimates for Conceptual Rainfall-runoff Models

    Science.gov (United States)

    Franz, K.; Hogue, T.; Barco, J.

    2007-12-01

    Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, when model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffle Complex Evolution (SCE), is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts, however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the

  13. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.

  14. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  15. Parameter estimation of hydrologic models using data assimilation

    Science.gov (United States)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper represents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.

  16. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed when the arresters are subjected to fast-front impulse currents. The difficulties with these models lie essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
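
    A hedged sketch of the same fitting idea using SciPy's differential evolution (a related population-based optimizer) in place of a genetic algorithm; the "arrester model" is a deliberately simplified static V-I characteristic rather than one of the dynamic models discussed in the paper, and the measured points are invented.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical measured residual-voltage points (kA, kV) for an impulse test
        i_meas = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
        v_meas = np.array([22.0, 23.5, 25.0, 27.0, 29.5])

        def v_model(i, k, alpha, r):
            """Simplified static arrester characteristic: V = k * I**(1/alpha) + R*I."""
            return k * i ** (1.0 / alpha) + r * i

        def objective(p):
            return np.sum((v_meas - v_model(i_meas, *p)) ** 2)

        bounds = [(5.0, 40.0), (5.0, 60.0), (0.0, 1.0)]     # k, alpha, R
        result = differential_evolution(objective, bounds, seed=3, tol=1e-8)
        print(result.x, result.fun)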

  17. GIS-Based Hydrogeological-Parameter Modeling

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A regression model is proposed to relate the variation of water well depth with topographic properties (area and slope), the variation of hydraulic conductivity and the vertical decay factor. The implementation of this model in a GIS environment (ARC/INFO), based on known water data and a DEM, is used to estimate the variation of hydraulic conductivity and decay factor of different lithology units in a watershed context.

  18. Return predictability and intertemporal asset allocation: Evidence from a bias-adjusted VAR model

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We extend the VAR-based intertemporal asset allocation approach from Campbell et al. (2003) to the case where the VAR parameter estimates are adjusted for small-sample bias. We apply the analytical bias formula from Pope (1990) using both Campbell et al.'s dataset, and an extended dataset...... with quarterly data from 1952 to 2006. The results show that correcting the VAR parameters for small-sample bias has both quantitatively and qualitatively important effects on the strategic intertemporal part of optimal portfolio choice, especially for bonds: for intermediate values of risk...
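
    Pope's multivariate bias formula is not reproduced here; the sketch below shows the idea in the scalar AR(1) special case using the familiar first-order approximation E[ρ̂] ≈ ρ − (1 + 3ρ)/T, applied to a synthetic persistent predictor series.

        import numpy as np

        def ar1_ols(y):
            """OLS slope of y_t on (1, y_{t-1})."""
            x = np.column_stack([np.ones(len(y) - 1), y[:-1]])
            beta = np.linalg.lstsq(x, y[1:], rcond=None)[0]
            return beta[1]

        def ar1_bias_adjusted(y):
            """First-order small-sample bias correction for the AR(1) coefficient;
            the scalar analogue of the multivariate VAR correction, not Pope's formula."""
            T = len(y) - 1
            rho_hat = ar1_ols(y)
            return rho_hat + (1.0 + 3.0 * rho_hat) / T

        rng = np.random.default_rng(4)
        rho_true, T = 0.95, 80                 # persistent predictor, short quarterly sample
        y = np.zeros(T + 1)
        for t in range(1, T + 1):
            y[t] = rho_true * y[t - 1] + rng.standard_normal()
        print(ar1_ols(y), ar1_bias_adjusted(y))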

  19. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens;

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...

  20. Mirror symmetry for two parameter models, 2

    CERN Document Server

    Candelas, Philip; Katz, S; Morrison, Douglas Robert Ogston; Philip Candelas; Anamaria Font; Sheldon Katz; David R Morrison

    1994-01-01

    We describe in detail the space of the two K\\"ahler parameters of the Calabi--Yau manifold \\P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi--Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,\\Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,\\Z) symmetry that acts on a boundary of the moduli space.

  1. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  2. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    Science.gov (United States)

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
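
    The two-parameter logistic model behind this comparison is easy to state; the snippet simulates responses and runs a small random-walk Metropolis update for a single examinee's ability with the item parameters held fixed, which is only a toy stand-in for a full Gibbs sampler over item and ability parameters.

        import numpy as np

        def p_correct(theta, a, b):
            """Two-parameter logistic item response function."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        rng = np.random.default_rng(5)
        a = rng.uniform(0.8, 2.0, 20)            # hypothetical discrimination parameters
        b = rng.normal(0.0, 1.0, 20)             # hypothetical difficulty parameters
        theta_true = 0.7
        resp = (rng.random(20) < p_correct(theta_true, a, b)).astype(int)

        def log_post(theta):
            p = p_correct(theta, a, b)
            return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p)) - 0.5 * theta ** 2

        # Random-walk Metropolis for theta given fixed item parameters, standard normal prior
        theta, draws = 0.0, []
        for _ in range(5000):
            prop = theta + rng.normal(0.0, 0.5)
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop
            draws.append(theta)
        print(f"posterior mean ability ~ {np.mean(draws[1000:]):.2f} (true {theta_true})")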

  3. On linear models and parameter identifiability in experimental biological systems.

    Science.gov (United States)

    Lamberton, Timothy O; Condon, Nicholas D; Stow, Jennifer L; Hamilton, Nicholas A

    2014-10-07

    A key problem in the biological sciences is to be able to reliably estimate model parameters from experimental data. This is the well-known problem of parameter identifiability. Here, methods are developed for biologists and other modelers to design optimal experiments to ensure parameter identifiability at a structural level. The main results of the paper are to provide a general methodology for extracting parameters of linear models from an experimentally measured scalar function - the transfer function - and a framework for the identifiability analysis of complex model structures using linked models. Linked models are composed by letting the output of one model become the input to another model which is then experimentally measured. The linked model framework is shown to be applicable to designing experiments to identify the measured sub-model and recover the input from the unmeasured sub-model, even in cases that the unmeasured sub-model is not identifiable. Applications for a set of common model features are demonstrated, and the results combined in an example application to a real-world experimental system. These applications emphasize the insight into answering "where to measure" and "which experimental scheme" questions provided by both the parameter extraction methodology and the linked model framework. The aim is to demonstrate the tools' usefulness in guiding experimental design to maximize parameter information obtained, based on the model structure.

  4. Criteria for Selecting and Adjusting Ground-Motion Models for Specific Target Regions: Application to Central Europe and Rock Sites

    Science.gov (United States)

    Cotton, Fabrice; Scherbaum, Frank; Bommer, Julian J.; Bungum, Hilmar

    2006-04-01

    A vital component of any seismic hazard analysis is a model for predicting the expected distribution of ground motions at a site due to possible earthquake scenarios. The limited nature of the datasets from which such models are derived gives rise to epistemic uncertainty in both the median estimates and the associated aleatory variability of these predictive equations. In order to capture this epistemic uncertainty in a seismic hazard analysis, more than one ground-motion prediction equation must be used, and the tool that is currently employed to combine multiple models is the logic tree. Candidate ground-motion models for a logic tree should be selected in order to obtain the smallest possible suite of equations that can capture the expected range of possible ground motions in the target region. This is achieved by starting from a comprehensive list of available equations and then applying criteria for rejecting those considered inappropriate in terms of quality, derivation or applicability. Once the final list of candidate models is established, adjustments must be applied to achieve parameter compatibility. Additional adjustments can also be applied to remove the effect of systematic differences between host and target regions. These procedures are applied to select and adjust ground-motion models for the analysis of seismic hazard at rock sites in West Central Europe. This region is chosen for illustrative purposes particularly because it highlights the issue of using ground-motion models derived from small magnitude earthquakes in the analysis of hazard due to much larger events. Some of the pitfalls of extrapolating ground-motion models from small to large magnitude earthquakes in low seismicity regions are discussed for the selected target region.

  5. Executive function and psychosocial adjustment in healthy children and adolescents: A latent variable modelling investigation.

    Science.gov (United States)

    Cassidy, Adam R

    2016-01-01

    The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential developmental differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ²(66) = 114.48, p < .001 … psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors.

  6. An improved state-parameter analysis of ecosystem models using data assimilation

    Science.gov (United States)

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining ensemble Kalman filter with kernel smoothing technique. SEnKF has following characteristics: (1) to estimate simultaneously the model states and parameters through concatenating unknown parameters and state variables into a joint state vector; (2) to mitigate dramatic, sudden changes of parameter values in parameter sampling and parameter evolution process, and control narrowing of parameter variance which results in filter divergence through adjusting smoothing factor in kernel smoothing algorithm; (3) to assimilate recursively data into the model and thus detect possible time variation of parameters; and (4) to address properly various sources of uncertainties stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the

  7. CHAMP: Changepoint Detection Using Approximate Model Parameters

    Science.gov (United States)

    2014-06-01

    positions as a Markov chain in which the transition probabilities are defined by the time since the last changepoint: p(τ_{i+1} = t | τ_i = s) = g(t − s). Results are experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. (Fragments of the algorithm listing in the source: inputs include a maximum number of particles M; the output is the Viterbi path of changepoint times and models.)

  8. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)

    [3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and .... this case, the coefficient B takes the dimension of a ... In plane-strain problems, the assumption of ... loaded circular region; s is the radial coordinate.

  9. Modified rapid shallow breathing index adjusted with anthropometric parameters increases predictive power for extubation failure compared with the unmodified index in postcardiac surgery patients.

    Science.gov (United States)

    Takaki, Shunsuke; Kadiman, Suhaini Bin; Tahir, Sharifah Suraya; Ariff, M Hassan; Kurahashi, Kiyoyasu; Goto, Takahisa

    2015-02-01

    The aim of this study was to determine the best predictors of successful extubation after cardiac surgery, by modifying the rapid shallow breathing index (RSBI) based on patients' anthropometric parameters. Single-center prospective observational study. Two general intensive care units at a single research institute. Patients who had undergone uncomplicated cardiac surgery. None. The following parameters were investigated in conjunction with modification of the RSBI: Actual body weight (ABW), predicted body weight, ideal body weight, body mass index (BMI), and body surface area. Using the first set of patient data, RSBI threshold and modified RSBI for extubation failure were determined (threshold value; RSBI: 77 breaths/min (bpm)/L, RSBI adjusted with ABW: 5.0 bpm×kg/mL, RSBI adjusted with BMI: 2.0 bpm×BMI/mL). These threshold values for RSBI and RSBI adjusted with ABW or BMI were validated using the second set of patient data. Sensitivity values for RSBI, RSBI modified with ABW, and RSBI modified with BMI were 91%, 100%, and 100%, respectively. The corresponding specificity values were 89%, 92%, and 93%, and the corresponding receiver operator characteristic values were 0.951, 0.977, and 0.980, respectively. Modified RSBI adjusted based on ABW or BMI has greater predictive power than conventional RSBI. Copyright © 2014 Elsevier Inc. All rights reserved.
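
    Reading the threshold units quoted above (bpm × kg/mL and bpm × BMI/mL), the anthropometric adjustment appears to amount to scaling the classic RR/VT ratio by actual body weight or BMI; the helper below encodes that reading, which is an assumption, together with the reported cut-offs and a hypothetical patient.

        def rsbi(rr_bpm, vt_ml):
            """Classic rapid shallow breathing index: breaths/min per litre of tidal volume."""
            return rr_bpm / (vt_ml / 1000.0)

        def rsbi_scaled(rr_bpm, vt_ml, scale):
            """RSBI adjusted with an anthropometric scale (ABW in kg, or BMI); assumed to
            mean RR * scale / VT, matching the bpm*kg/mL and bpm*BMI/mL threshold units."""
            return rr_bpm * scale / vt_ml

        # Thresholds reported in the abstract; values above them predicted extubation failure
        thresholds = {"RSBI": 77.0, "RSBI*ABW": 5.0, "RSBI*BMI": 2.0}

        # Hypothetical patient: 22 breaths/min, 350 mL tidal volume, 68 kg, BMI 24
        rr, vt, abw, bmi = 22.0, 350.0, 68.0, 24.0
        values = {"RSBI": rsbi(rr, vt),
                  "RSBI*ABW": rsbi_scaled(rr, vt, abw),
                  "RSBI*BMI": rsbi_scaled(rr, vt, bmi)}
        for name, value in values.items():
            verdict = "predicts failure" if value > thresholds[name] else "below threshold"
            print(f"{name}: {value:.1f} ({verdict})")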

  10. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    Directory of Open Access Journals (Sweden)

    H. Lee

    2012-01-01

    Full Text Available State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA and kinematic-wave routing models of the US National Weather Service (NWS Research Distributed Hydrologic Model (RDHM with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation

  11. Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models

    Science.gov (United States)

    Bates, Bryson C.

    1992-01-01

    A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.

  12. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
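
    A compact version of the simulate-then-estimate loop described above: concentrations are generated from the standard two-dimensional instantaneous point-release Gaussian solution (standing in for the paper's shear-diffusion model), Gaussian "sensor" noise is added, and the parameters are recovered by least squares; all numbers are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def plume(xy, M, Dx, Dy, u, t=3600.0):
            """2-D instantaneous point release advected in x: Gaussian solution."""
            x, y = xy
            return (M / (4.0 * np.pi * t * np.sqrt(Dx * Dy))
                    * np.exp(-((x - u * t) ** 2 / (4.0 * Dx * t) + y ** 2 / (4.0 * Dy * t))))

        # Simulated remote-sensing grid with additive Gaussian noise
        rng = np.random.default_rng(6)
        x, y = np.meshgrid(np.linspace(0, 4000, 40), np.linspace(-1500, 1500, 30))
        xy = (x.ravel(), y.ravel())
        true = (5.0e5, 30.0, 15.0, 0.5)                     # M, Dx, Dy, u (hypothetical)
        c_obs = plume(xy, *true) + rng.normal(0.0, 0.002, x.size)

        # Least-squares batch estimation of the transport-model parameters
        popt, pcov = curve_fit(plume, xy, c_obs, p0=(1.0e5, 20.0, 10.0, 0.45))
        print(np.round(popt, 3), np.round(np.sqrt(np.diag(pcov)), 4))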

  13. On retrial queueing model with fuzzy parameters

    Science.gov (United States)

    Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng

    2007-01-01

    This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial-queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, fuzzy retrial-queue is represented more accurately and analytic results are more useful for system designers and practitioners.
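
    The α-cut idea can be illustrated with a much simpler stand-in than the retrial queue of the paper: a plain M/M/1 mean queue length L = ρ/(1 − ρ) is evaluated over α-cut intervals of triangular fuzzy arrival and service rates, using monotonicity to obtain interval bounds; the fuzzy numbers are hypothetical.

        import numpy as np

        def alpha_cut(tri, alpha):
            """alpha-cut interval of a triangular fuzzy number (low, mode, high)."""
            low, mode, high = tri
            return low + alpha * (mode - low), high - alpha * (high - mode)

        def queue_length(lam, mu):
            rho = lam / mu
            return rho / (1.0 - rho)          # mean number in a plain M/M/1 queue

        lam_fuzzy = (3.0, 4.0, 5.0)           # hypothetical fuzzy arrival rate (per hour)
        mu_fuzzy = (8.0, 10.0, 12.0)          # hypothetical fuzzy service rate (per hour)

        for alpha in np.linspace(0.0, 1.0, 5):
            lam_lo, lam_hi = alpha_cut(lam_fuzzy, alpha)
            mu_lo, mu_hi = alpha_cut(mu_fuzzy, alpha)
            # L increases in lambda and decreases in mu, so the interval ends
            # come from opposite corners of the (lambda, mu) box
            L_lo, L_hi = queue_length(lam_lo, mu_hi), queue_length(lam_hi, mu_lo)
            print(f"alpha={alpha:.2f}: L in [{L_lo:.3f}, {L_hi:.3f}]")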

  14. Solar parameters for modeling interplanetary background

    CERN Document Server

    Bzowski, M; Tokumaru, M; Fujiki, K; Quemerais, E; Lallement, R; Ferron, S; Bochsler, P; McComas, D J

    2011-01-01

    The goal of the Fully Online Datacenter of Ultraviolet Emissions (FONDUE) Working Team of the International Space Science Institute in Bern, Switzerland, was to establish a common calibration of various UV and EUV heliospheric observations, both spectroscopic and photometric. Realization of this goal required an up-to-date model of spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the solar factors shaping the distribution of neutral interstellar H in the heliosphere. Presented are the solar Lyman-alpha flux and the solar Lyman-alpha resonant radiation pressure force acting on neutral H atoms in the heliosphere, solar EUV radiation and the photoionization of heliospheric hydrogen, and their evolution in time and the still hypothetical variation with heliolatitude. Further, solar wind and its evolution with solar activity is presented in the context of the charge excha...

  15. Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models

    CERN Document Server

    Hori, Kentaro

    2013-01-01

    We systematically construct a class of two-dimensional $(2,2)$ supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one Kähler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, $h^{2,1}=23$ versus $h^{2,1}=59$, have exactly the same quantum Kähler moduli space. The strong-weak duality plays a crucial rôle in confirming this, and also is useful in the actual computation of the metric on t...

  16. Parameter identification in tidal models with uncertain boundaries

    NARCIS (Netherlands)

    Bagchi, Arunabha; ten Brummelhuis, P.G.J.; ten Brummelhuis, Paul

    1994-01-01

    In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The

  17. Exploring the interdependencies between parameters in a material model.

    Energy Technology Data Exchange (ETDEWEB)

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  18. An Alternative Three-Parameter Logistic Item Response Model.

    Science.gov (United States)

    Pashley, Peter J.

    Birnbaum's three-parameter logistic function has become a common basis for item response theory modeling, especially within situations where significant guessing behavior is evident. This model is formed through a linear transformation of the two-parameter logistic function in order to facilitate a lower asymptote. This paper discusses an…

  19. Parameter identification in tidal models with uncertain boundaries

    NARCIS (Netherlands)

    Bagchi, Arunabha; Brummelhuis, ten Paul

    1994-01-01

    In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The hyperboli

  20. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    , and it is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of changing yield level, is included in the form of extended evolution equations for the model parameters...

  1. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    Science.gov (United States)

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  2. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which little or no monitoring data are available, raising the question of whether model parameters obtained through calibration of gauged watersheds can be extended and/or generalized to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. The resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
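
    Since model performance above is summarized by Nash-Sutcliffe efficiencies (NS), a minimal sketch of how that statistic is commonly computed from observed and simulated stream flow follows; the array values are placeholders, not the study's data.

```python
# Sketch: Nash-Sutcliffe efficiency (NS) for observed vs. simulated flows.
# NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative values only:
observed = [12.1, 15.4, 9.8, 22.3, 18.7]
simulated = [11.5, 16.0, 10.5, 20.9, 17.8]
print(f"NS = {nash_sutcliffe(observed, simulated):.2f}")
```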

  3. Validating Mechanistic Sorption Model Parameters and Processes for Reactive Transport in Alluvium

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, M; Roberts, S K; Rose, T P; Phinney, D L

    2002-05-02

    The laboratory batch and flow-through experiments presented in this report provide a basis for validating the mechanistic surface complexation and ion exchange model we use in our hydrologic source term (HST) simulations. Batch sorption experiments were used to examine the effect of solution composition on sorption. Flow-through experiments provided for an analysis of the transport behavior of sorbing elements and tracers which includes dispersion and fluid accessibility effects. Analysis of downstream flow-through column fluids allowed for evaluation of weakly-sorbing element transport. Secondary Ion Mass Spectrometry (SIMS) analysis of the core after completion of the flow-through experiments permitted the evaluation of transport of strongly sorbing elements. A comparison between these data and model predictions provides additional constraints to our model and improves our confidence in near-field HST model parameters. In general, cesium, strontium, samarium, europium, neptunium, and uranium behavior could be accurately predicted using our mechanistic approach but only after some adjustment was made to the model parameters. The required adjustments included a reduction in strontium affinity for smectite, an increase in cesium affinity for smectite and illite, a reduction in iron oxide and calcite reactive surface area, and a change in clinoptilolite reaction constants to reflect a more recently published set of data. In general, these adjustments are justifiable because they fall within a range consistent with our understanding of the parameter uncertainties. These modeling results suggest that the uncertainty in the sorption model parameters must be accounted for to validate the mechanistic approach. The uncertainties in predicting the sorptive behavior of U-1a and UE-5n alluvium also suggest that these uncertainties must be propagated to nearfield HST and large-scale corrective action unit (CAU) models.

  4. NWP model forecast skill optimization via closure parameter variations

    Science.gov (United States)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
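
    A heavily simplified sketch of the importance-weighted proposal update at the heart of an EPPES-like scheme is given below; the Gaussian proposal, the toy skill function and all variable names are assumptions made for illustration and do not reproduce the operational algorithm.

```python
# Sketch: one EPPES-like cycle -- draw an ensemble of parameter values from a
# Gaussian proposal, score each member against verifying observations, and
# update the proposal mean/covariance with likelihood-type weights.
# Everything here (toy model, weights, names) is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([0.5, 2.0])     # current proposal mean for two closure parameters
cov = np.diag([0.05, 0.2])      # current proposal covariance

def forecast_skill(theta):
    """Toy 'verification': negative squared error against a hidden optimum."""
    target = np.array([0.7, 1.5])
    return -np.sum((theta - target) ** 2)

ensemble = rng.multivariate_normal(mean, cov, size=50)   # one member per forecast
scores = np.array([forecast_skill(t) for t in ensemble])
weights = np.exp(scores - scores.max())                  # likelihood-type weights
weights /= weights.sum()

mean = weights @ ensemble                                # weighted update of proposal
centered = ensemble - mean
cov = (centered * weights[:, None]).T @ centered
print("updated proposal mean:", np.round(mean, 3))
```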

  5. Transferability of calibrated microsimulation model parameters for safety assessment using simulated conflicts.

    Science.gov (United States)

    Essa, Mohamed; Sayed, Tarek

    2015-11-01

    Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. As well, the results have emphasized that using micro-simulation model to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection which maximized the correlation between simulated and field-observed conflicts were used to estimate traffic conflicts at the second intersection and to compare the results to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations as the transferred parameters provided better correlation between simulated and field-measured conflicts than using the default VISSIM parameters. Of the six VISSIM parameters identified as

  6. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
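
    For concreteness, one common way to write the full statistical likelihood model with AR(1) simulation errors described above is sketched here in generic notation (not taken from the paper):

```latex
% Sketch of an AR(1) error model for simulated vs. observed streamflow (generic notation):
q_t^{\mathrm{obs}} = q_t^{\mathrm{sim}}(\theta) + e_t, \qquad
e_t = \rho\, e_{t-1} + \varepsilon_t, \qquad
\varepsilon_t \sim \mathcal{N}(0, \sigma^2),
% the "simple" likelihood model then corresponds to \rho = 0 (independent errors).
```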

  7. Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia

    NARCIS (Netherlands)

    Van der Wal, W.; Barnhoorn, A.; Stocchi, P.; Gradmann, S.; Wu, P.; Drury, M.; Vermeersen, L.L.A.

    2013-01-01

    Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes

  8. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Science.gov (United States)

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  9. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    Science.gov (United States)

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  10. Designing a model to improve first year student adjustment to university

    Directory of Open Access Journals (Sweden)

    Nasrin Nikfal Azar

    2014-05-01

    Full Text Available The increase in the number of universities over the last decade in Iran increases the need for higher education institutions to manage their enrollment more effectively. The purpose of this study is to design a model to improve first-year university student adjustment by examining the effects of academic self-efficacy, academic motivation, satisfaction, high school GPA and demographic variables on students' adjustment to university. The study selects a sample of 357 students out of 4585 first-year bachelor students who were enrolled in different programs. Three questionnaires were used for collection of data for this study, namely academic self-efficacy, academic motivation and student satisfaction with university. Structural equation modeling was employed using AMOS version 7.16 to test the adequacy of the hypothesized model. Inclusion of additional relationships in the initial model improved the goodness indices considerably. The results suggest that academic self-efficacy was related positively to adjustment, both directly (B=0.35) and indirectly through student satisfaction (B=0.14) and academic motivation (B=0.9). The results indicate a need to develop programs that effectively promote the self-efficacy of first-year students to increase college adjustment and, consequently, retention rates.

  11. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    Science.gov (United States)

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  12. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameters estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least square algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended-Kalman-filter (EKF) is applied to on-line estimate battery SOC and model parameters. Considering that the EKF is essentially a first-order Taylor approximation of battery model, which contains inevitable model errors, thus, a proportional integral-based error adjustment technique is employed to improve the performance of EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide robust and accurate battery model and on-line parameter estimation.
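
    A generic discrete-time form of a first-order (single RC branch) equivalent circuit model of the kind referred to above is sketched below; the symbols (R0 ohmic resistance, Rp/Cp polarization pair, Cn nominal capacity, η coulombic efficiency) follow common usage and are not copied from the paper.

```latex
% Generic first-order equivalent-circuit battery model (discrete time, common notation):
\mathrm{SOC}_{k+1} = \mathrm{SOC}_k - \frac{\eta\,\Delta t}{C_n}\, i_k, \qquad
U_{p,k+1} = U_{p,k}\, e^{-\Delta t/(R_p C_p)} + R_p \left(1 - e^{-\Delta t/(R_p C_p)}\right) i_k,
\\
U_{t,k} = U_{\mathrm{ocv}}(\mathrm{SOC}_k) - U_{p,k} - R_0\, i_k,
% where the EKF linearizes U_ocv(SOC) about the current estimate at each step.
```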

  13. Linear regression models of floor surface parameters on friction between Neolite and quarry tiles.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Grönqvist, Raoul; Hirvonen, Mikko

    2010-01-01

    For slips and falls, friction is widely used as an indicator of surface slipperiness. Surface parameters, including surface roughness and waviness, were shown to influence friction by correlating individual surface parameters with the measured friction. A collective input from multiple surface parameters as a predictor of friction, however, could provide a broader perspective on the contributions from all the surface parameters evaluated. The objective of this study was to develop regression models between the surface parameters and measured friction. The dynamic friction was measured using three different mixtures of glycerol and water as contaminants. Various surface roughness and waviness parameters were measured using three different cut-off lengths. The regression models indicate that the selected surface parameters can predict the measured friction coefficient reliably in most of the glycerol concentrations and cut-off lengths evaluated. The results of the regression models were, in general, consistent with those obtained from the correlation between individual surface parameters and the measured friction in eight out of nine conditions evaluated in this experiment. A hierarchical regression model was further developed to evaluate the cumulative contributions of the surface parameters in the final iteration by adding these parameters to the regression model one at a time from the easiest to measure to the most difficult to measure and evaluating their impacts on the adjusted R^2 values. For practical purposes, the surface parameter R_a alone would account for the majority of the measured friction even if it did not reach a statistically significant level in some of the regression models.
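
    A minimal sketch of the hierarchical-regression idea, adding surface parameters one at a time and tracking the adjusted R^2 with statsmodels, is given below; the predictor names and values are placeholders rather than the measured roughness and waviness data.

```python
# Sketch: hierarchical OLS -- add surface parameters one at a time and track
# the adjusted R^2. Data and predictor names are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40
Ra = rng.uniform(1.0, 10.0, n)     # e.g. average roughness (easiest to measure)
Wa = rng.uniform(0.5, 5.0, n)      # e.g. a waviness parameter
friction = 0.05 * Ra + 0.02 * Wa + rng.normal(scale=0.03, size=n)

predictors = []
for name, column in [("Ra", Ra), ("Wa", Wa)]:
    predictors.append(column)
    X = sm.add_constant(np.column_stack(predictors))
    fit = sm.OLS(friction, X).fit()
    print(f"after adding {name}: adjusted R^2 = {fit.rsquared_adj:.3f}")
```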

  14. State and Parameter Estimation for a Coupled Ocean--Atmosphere Model

    Science.gov (United States)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
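
    Joint state-parameter estimation of this kind is typically set up by augmenting the state vector with the uncertain parameters and letting them evolve as a slow random walk; a generic sketch (not the model's actual equations) is:

```latex
% Generic state augmentation for joint state-parameter EKF (illustrative notation):
z_k = \begin{pmatrix} x_k \\ \mu_k \\ \delta_{s,k} \end{pmatrix}, \qquad
x_{k+1} = f(x_k, \mu_k, \delta_{s,k}) + w_k, \qquad
\begin{pmatrix} \mu_{k+1} \\ \delta_{s,k+1} \end{pmatrix}
= \begin{pmatrix} \mu_k \\ \delta_{s,k} \end{pmatrix} + \eta_k,
% with SST observations y_k = h(x_k) + v_k assimilated by the EKF.
```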

  15. Systematic review of risk adjustment models of hospital length of stay (LOS).

    Science.gov (United States)

    Lu, Mingshan; Sajobi, Tolulope; Lucyk, Kelsey; Lorenzetti, Diane; Quan, Hude

    2015-04-01

    Policy decisions in health care, such as hospital performance evaluation and performance-based budgeting, require an accurate prediction of hospital length of stay (LOS). This paper provides a systematic review of risk adjustment models for hospital LOS, and focuses primarily on studies that use administrative data. MEDLINE, EMBASE, Cochrane, PubMed, and EconLit were searched for studies that tested the performance of risk adjustment models in predicting hospital LOS. We included studies that tested models developed for the general inpatient population, and excluded those that analyzed risk factors only correlated with LOS, impact analyses, or those that used disease-specific scales and indexes to predict LOS. Our search yielded 3973 abstracts, of which 37 were included. These studies used various disease groupers and severity/morbidity indexes to predict LOS. Few models were developed specifically for explaining hospital LOS; most focused primarily on explaining resource spending and the costs associated with hospital LOS, and applied these models to hospital LOS. We found a large variation in predictive power across different LOS predictive models. The best model performance for most studies fell in the range of 0.30-0.60, approximately. The current risk adjustment methodologies for predicting LOS are still limited in terms of models, predictors, and predictive power. One possible approach to improving the performance of LOS risk adjustment models is to include more disease-specific variables, such as disease-specific or condition-specific measures, and functional measures. For this approach, however, more comprehensive and standardized data are urgently needed. In addition, statistical methods and evaluation tools more appropriate to LOS should be tested and adopted.

  16. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...

  17. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein systems

  18. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    Directory of Open Access Journals (Sweden)

    Baker Syed

    2011-01-01

    Full Text Available Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.

  19. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    Science.gov (United States)

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-10-11

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.

  20. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    Science.gov (United States)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but also the CC model with some other a priorly specified frequency dispersion (e.g. Warburg model) has been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, pose the question to which degree the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
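
    For reference, the Cole-Cole (CC) resistivity model discussed above is commonly written in the form below, with ρ0 the DC resistivity, m the chargeability, τ the relaxation time and c the frequency dispersion; the Debye kernel corresponds to c = 1 and the Warburg kernel to c = 0.5.

```latex
% Cole-Cole complex resistivity model (common form):
\rho(\omega) \;=\; \rho_0 \left[ 1 - m \left( 1 - \frac{1}{1 + (i\omega\tau)^{c}} \right) \right]
% Debye kernel: c = 1;  Warburg kernel: c = 0.5.
```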

  1. Identification of hydrological model parameter variation using ensemble Kalman filter

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao

    2016-12-01

    Hydrological model parameters play an important role in the ability of model prediction. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
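
    A bare-bones sketch of how an EnKF can track a slowly varying parameter by treating it as an augmented state and updating it with runoff observations is shown below; the toy runoff surrogate, the noise levels and the variable names are illustrative assumptions, not the TWBM configuration used in the study.

```python
# Sketch: EnKF-style update of a single model parameter (e.g. storage capacity SC)
# treated as an augmented state and nudged by runoff observations.
# Toy model and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_ens = 100
sc = rng.normal(300.0, 30.0, n_ens)          # parameter ensemble (mm)

def simulate_runoff(sc, rainfall):
    """Toy surrogate: larger storage capacity -> less runoff."""
    return np.maximum(rainfall - 0.002 * sc * rainfall, 0.0)

for rainfall, obs_runoff in [(80.0, 40.0), (120.0, 65.0), (60.0, 28.0)]:
    sc += rng.normal(0.0, 2.0, n_ens)                    # small random walk on the parameter
    sim = simulate_runoff(sc, rainfall)
    obs_pert = obs_runoff + rng.normal(0.0, 2.0, n_ens)  # perturbed observations
    cov_py = np.cov(sc, sim)[0, 1]                       # parameter-prediction covariance
    var_y = sim.var() + 2.0 ** 2                         # prediction + observation variance
    gain = cov_py / var_y
    sc += gain * (obs_pert - sim)                        # Kalman-type update
    print(f"rain={rainfall:5.1f}  SC estimate = {sc.mean():6.1f} mm")
```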

  2. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used to calculate the odds of each category; when these categories are ordered (levels), the model is an ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model that is influenced by the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of a GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results of the research yield a local GWOLR model for each village and give the probability of each dengue fever patient-count category.

  3. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
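
    A minimal sketch of how such a sensitivity spectrum is typically obtained, by differentiating the model output with respect to log-parameters and inspecting the eigenvalues of J^T J, is shown below for a toy two-exponential model; everything in it is illustrative and not one of the surveyed models.

```python
# Sketch: "sloppy" sensitivity spectrum of a toy model -- eigenvalues of J^T J,
# where J holds derivatives of the time-series output w.r.t. log-parameters.
import numpy as np

def model(log_params, t):
    a1, k1, a2, k2 = np.exp(log_params)
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0.0, 10.0, 50)
log_p0 = np.log([1.0, 0.5, 0.8, 0.6])   # toy "best-fit" parameters
eps = 1e-6

# Finite-difference Jacobian of the output with respect to log-parameters.
J = np.empty((t.size, log_p0.size))
for j in range(log_p0.size):
    dp = np.zeros_like(log_p0)
    dp[j] = eps
    J[:, j] = (model(log_p0 + dp, t) - model(log_p0 - dp, t)) / (2.0 * eps)

eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]
print("eigenvalue range spans", np.log10(eigvals.max() / eigvals.min()), "decades")
```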

  4. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    Directory of Open Access Journals (Sweden)

    Guanqun eZhang

    2011-11-01

    Full Text Available A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.

  5. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    Science.gov (United States)

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  6. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. Wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers a higher flexibility than the model programmed in PSIM software.

  7. Parameter-adjusted stochastic resonance system for the aperiodic echo chirp signal in optimal FrFT domain

    Science.gov (United States)

    Lin, Li-feng; Yu, Lei; Wang, Huiqi; Zhong, Suchuan

    2017-02-01

    In order to improve the system performance for moving target detection and localization, this paper presents a new aperiodic chirp signal and additive noise driving stochastic dynamical system, in which the internal frequency has the linear variation matching with the driving frequency. By using the fractional Fourier transform (FrFT) operator with the optimal order, the proposed time-domain dynamical system is transformed into the equivalent FrFT-domain system driven by the periodic signal and noise. Therefore, system performance is conveniently analyzed from the view of output signal-to-noise ratio (SNR) in optimal FrFT domain. Simulation results demonstrate that the output SNR, as a function of system parameter, shows the different generalized SR behaviors in the case of various internal parameters of driving chirp signal and external parameters of the moving target.

  8. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    Two-dimensional hidden periodic model is an important model in random fields. The model is used in the field of two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters for the model is designed. The strong consistency of the estimators is proved.

  9. Identification of parameters of discrete-continuous models

    Energy Technology Data Exchange (ETDEWEB)

    Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on a genetic algorithm. The stages of the presented procedure make it possible to identify any parameters of discrete-continuous systems.

  10. Estimating parameters for generalized mass action models with connectivity information

    Directory of Open Access Journals (Sweden)

    Voit Eberhard O

    2009-05-01

    Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
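
    A schematic of posing parameter estimation as a constrained optimization, minimizing the misfit to time-series data subject to a steady-state flux-balance constraint, might look like the sketch below; the two-flux toy system and all names and values are assumptions made for illustration only.

```python
# Sketch: constrained parameter estimation -- fit rate constants to (toy) dynamic
# data while enforcing a steady-state flux-connectivity constraint v1(Xss) = v2(Xss).
# The toy system and all values are illustrative only.
import numpy as np
from scipy.optimize import minimize

t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
x_obs = np.array([0.20, 0.45, 0.62, 0.80, 0.92])   # toy metabolite profile
x_ss = 1.0                                          # assumed steady-state level

def simulate(k, t):
    k1, k2 = k
    # Closed form for dx/dt = k1 - k2*x with x(0) = x_obs[0]
    return k1 / k2 + (x_obs[0] - k1 / k2) * np.exp(-k2 * t)

def misfit(k):
    return np.sum((simulate(k, t_obs) - x_obs) ** 2)

# Steady-state constraint: production flux equals consumption flux at x_ss.
cons = {"type": "eq", "fun": lambda k: k[0] - k[1] * x_ss}

res = minimize(misfit, x0=[0.5, 0.5], constraints=[cons],
               bounds=[(1e-6, None), (1e-6, None)], method="SLSQP")
print("estimated k1, k2:", np.round(res.x, 3))
```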

  11. Parameter estimation for LLDPE gas-phase reactor models

    Directory of Open Access Journals (Sweden)

    G. A. Neumann

    2007-06-01

    Full Text Available Product development and advanced control applications require models with good predictive capability. However, in some cases it is not possible to obtain good-quality phenomenological models due to the lack of data or the presence of important unmeasured effects. The use of empirical models requires less investment in modeling, but implies the need for larger amounts of experimental data to generate models with good predictive capability. In this work, nonlinear phenomenological and empirical models were compared with respect to their capability to predict the melt index and polymer yield of a low-density polyethylene production process consisting of two fluidized bed reactors connected in series. To adjust the phenomenological model, optimization algorithms based on the flexible polyhedron method of Nelder and Mead showed the best efficiency. Among the empirical models, the PLS model was more appropriate for polymer yield, while the melt index required more nonlinear structures such as the QPLS models. In the comparison between these two types of models, better results were obtained for the empirical models.

  12. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Stelian Stancu

    2008-06-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained at the economy level from the simulations are presented: the ratio of capital to output volume, the output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.

  13. Modelling the rate of change in a longitudinal study with missing data, adjusting for contact attempts.

    Science.gov (United States)

    Akacha, Mouna; Hutton, Jane L

    2011-05-10

    The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome model with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, where we adjust for missingness by modelling the missingness process. The conclusions that recovery is slower, and less satisfactory with age and more rapid with below knee cast than with a tubular bandage do not alter materially across all models investigated. The results also suggest that phone calls are most effective in retrieving questionnaires.

  14. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular, physics-based models, are essential tools to food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look into the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. And, to finish, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.

  15. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
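
    For orientation, the Feller (square-root) diffusion model of the membrane potential is usually written as below, with μ and σ the input parameters to be estimated; the notation is generic rather than copied from the paper.

```latex
% Feller (square-root) neuronal diffusion model, generic notation:
dX_t \;=\; \left( -\frac{X_t}{\tau} + \mu \right) dt \;+\; \sigma \sqrt{X_t}\; dW_t, \qquad X_0 = x_0,
% with a spike recorded when X_t first crosses the firing threshold S > x_0
% (the first-passage time whose moments are used above).
```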

  16. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-05-01

    Full Text Available Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
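
    The final downhill simplex step of such a procedure can be prototyped directly with SciPy's Nelder-Mead implementation, as in the sketch below; the skill function is a cheap stand-in for an actual GCM evaluation and the parameter names are hypothetical.

```python
# Sketch: downhill simplex (Nelder-Mead) tuning of two sensitive parameters
# against an aggregate skill metric. The skill function is a cheap stand-in
# for a GCM evaluation; parameter names are hypothetical.
import numpy as np
from scipy.optimize import minimize

def skill_metric(params):
    """Lower is better; pretend optimum at (entrainment=0.8, autoconv=1.2)."""
    entrainment, autoconv = params
    return (entrainment - 0.8) ** 2 + 0.5 * (autoconv - 1.2) ** 2

x0 = np.array([0.5, 2.0])   # optimum initial values from the screening step
res = minimize(skill_metric, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 200})
print("tuned parameters:", np.round(res.x, 3), " metric:", round(res.fun, 5))
```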

  17. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper, a study on finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  18. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.

  19. A New Approach for Parameter Optimization in Land Surface Model

    Institute of Scientific and Technical Information of China (English)

    LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin

    2011-01-01

    In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM) using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as parameters to be optimized. Our study comprised three experiments: first, a single-parameter optimization was performed, while the second and third experiments performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.

  20. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    . Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...

  1. Estimating winter wheat phenological parameters: Implications for crop modeling

    Science.gov (United States)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  2. Adaptive Adjustment of Relaxation Parameters for Algebraic Reconstruction Technique and its Possible Application to Sparsity Prior X-ray CT Reconstruction

    CERN Document Server

    Saha, Sajib; Lambert, Andrew; Pickering, Mark

    2015-01-01

    In this paper, we systematically evaluate the performance of adaptive adjustment of the relaxation parameters of various iterative algorithms for X-ray CT reconstruction relying on sparsity priors. Sparsity priors have been found to be an efficient strategy in CT reconstruction where significantly fewer attenuation measurements are available. Sparsity-prior CT reconstruction relies on iterative algorithms such as the algebraic reconstruction technique (ART) to produce a crude reconstruction, based on which a sparse approximation is performed. Data-driven adjustment of relaxation has been found to ensure better convergence than traditional relaxation for ART. In this paper, we study the performance of such data-driven relaxation in a compressed sensing (CS) environment. State-of-the-art algorithms are implemented and their performance analyzed with regard to conventional and data-driven relaxation. Experiments are performed both on simulated and real environments. For the simulated case, experiments are conducted w...
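
    The sketch below illustrates the basic ART row update with a relaxation parameter that is damped whenever the residual stops decreasing. The adaptation rule and the toy system are illustrative assumptions, not the data-driven scheme evaluated in the paper.

        # Sketch of the algebraic reconstruction technique (ART) with a
        # relaxation parameter that is reduced whenever the residual stops
        # decreasing. The adaptation rule is illustrative, not the paper's.
        import numpy as np

        def art(A, b, n_iter=50, lam=1.0, shrink=0.5):
            x = np.zeros(A.shape[1])
            prev_res = np.inf
            row_norms = np.sum(A * A, axis=1)
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0:
                        continue
                    # Kaczmarz/ART row update with relaxation lam
                    x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                res = np.linalg.norm(A @ x - b)
                if res > prev_res:        # data-driven adjustment: damp lam
                    lam *= shrink
                prev_res = res
            return x

        # tiny toy system standing in for the CT projection matrix
        rng = np.random.default_rng(0)
        A = rng.random((30, 10))
        x_true = rng.random(10)
        x_rec = art(A, A @ x_true)
        print(np.round(x_rec - x_true, 3))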

  3. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  4. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type of Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude similar to that of the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable to describe the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
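
    For orientation, the free parameters that such daily re-estimation updates are those of the ETAS conditional intensity, commonly written in its temporal form as below (a standard Ogata-type expression; the paper's exact spatial parameterisation may differ):

        \lambda(t \mid H_t) = \mu + \sum_{i:\, t_i < t} K \, e^{\alpha (M_i - M_0)} \, (t - t_i + c)^{-p}

    Here mu is the background rate, K, alpha, c and p are the aftershock-productivity and Omori-decay parameters re-fitted each day, M_0 is the reference magnitude, and the sum runs over all earlier events in the catalog.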

  5. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
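
    A minimal Python analogue of such an exercise is sketched below, fitting logistic growth parameters to synthetic data with scipy's least-squares curve fitting rather than the gradient search and Mathematica code of the article; the values and noise level are invented for illustration.

        # Sketch: estimating r and K of the logistic model dP/dt = r*P*(1 - P/K)
        # by fitting its closed-form solution to synthetic observations.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, r, K, P0):
            return K / (1.0 + (K / P0 - 1.0) * np.exp(-r * t))

        t = np.arange(0, 20, 1.0)
        data = logistic(t, r=0.5, K=100.0, P0=5.0) * (1 + 0.02 * np.random.randn(t.size))

        popt, pcov = curve_fit(logistic, t, data, p0=[0.3, 80.0, 4.0])
        print("estimated r, K, P0:", popt)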

  6. Dynamic Modeling and Parameter Identification of Power Systems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The generator, the excitation system, the steam turbine and speed governor, and the load are the so-called four key models of power systems. Mathematical modeling and parameter identification for the four key models are of great importance as the basis for designing, operating, and analyzing power systems.

  7. Dynamic Load Model using PSO-Based Parameter Estimation

    Science.gov (United States)

    Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu

    This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of the actual load when appropriate parameters are used. However, the problem with this model is that many parameters are necessary and they are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using voltage, active power and reactive power data measured during a voltage sag.
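
    A bare-bones particle swarm optimiser is sketched below to show the mechanics of the approach; the objective function is a placeholder for the mismatch between measured and simulated load responses, not the composite load model of the paper.

        # Minimal particle swarm optimisation sketch. The objective below is a
        # stand-in for the measured-versus-simulated response mismatch.
        import numpy as np

        def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(1)
            lo, hi = np.array(bounds).T
            dim = lo.size
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)]
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[np.argmin(pbest_f)]
            return gbest, pbest_f.min()

        # placeholder objective: recover two "load parameters" from a known target
        target = np.array([0.4, 1.8])
        obj = lambda p: np.sum((p - target) ** 2)
        best, best_f = pso(obj, bounds=[(0, 1), (0, 5)])
        print(best, best_f)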

  8. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  9. Optimal Parameter and Uncertainty Estimation of a Land Surface Model: Sensitivity to Parameter Ranges and Model Complexities

    Institute of Scientific and Technical Information of China (English)

    Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on the frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.

  10. Detailed Theoretical Model for Adjustable Gain-Clamped Semiconductor Optical Amplifier

    Directory of Open Access Journals (Sweden)

    Lin Liu

    2012-01-01

    The adjustable gain-clamped semiconductor optical amplifier (AGC-SOA) uses two SOAs in a ring-cavity topology: one to amplify the signal and the other to control the gain. The device was designed to maximize the output saturated power while adjusting gain to regulate power differences between packets without loss of linearity. This type of subsystem can be used for power equalisation and linear amplification in packet-based dynamic systems such as passive optical networks (PONs). A detailed theoretical model is presented in this paper to simulate the operation of the AGC-SOA, which gives a better understanding of the underlying gain clamping mechanics. Simulations and comparisons with steady-state and dynamic gain modulation experimental performance are given which validate the model.

  11. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan D.; Stieglitz, Marc

    2015-06-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
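
    The Kling-Gupta efficiency used to score the transferred parameters is, in its standard 2009 form, computed as sketched below (assuming that formulation; the paper may use a variant).

        # Sketch of the Kling-Gupta efficiency (KGE) used to measure the decline
        # in prediction performance after parameter transfer (2009 formulation).
        import numpy as np

        def kge(sim, obs):
            r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
            alpha = np.std(sim) / np.std(obs)        # variability ratio
            beta = np.mean(sim) / np.mean(obs)       # bias ratio
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        obs = np.array([1.2, 3.4, 2.8, 0.9, 1.7])
        sim = np.array([1.0, 3.1, 3.0, 1.1, 1.5])
        print(round(kge(sim, obs), 3))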

  12. Controlling chaos using Takagi-Sugeno fuzzy model and adaptive adjustment

    Institute of Scientific and Technical Information of China (English)

    Zheng Yong-Ai

    2006-01-01

    In this paper, an approach to the control of continuous-time chaotic systems is proposed using the Takagi-Sugeno (TS) fuzzy model and adaptive adjustment. Sufficient conditions are derived to guarantee chaos control from Lyapunov stability theory. The proposed approach offers a systematic design procedure for stabilizing a large class of chaotic systems in the literature about chaos research. The simulation results on Rössler's system verify the effectiveness of the proposed methods.

  13. Solution model of nonlinear integral adjustment including different kinds of observing data with different precisions

    Institute of Scientific and Technical Information of China (English)

    郭金运; 陶华学

    2003-01-01

    In order to process different kinds of observing data with different precisions, a new solution model of nonlinear dynamic integral least squares adjustment was put forward, which does not depend on derivatives. The partial derivative of each component in the target function is not computed while iteratively solving the problem. Especially when the nonlinear target function is complex and the problem is very difficult to solve, the method can greatly reduce the computing load.
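
    As a rough illustration of a derivative-free adjustment, the sketch below minimises a precision-weighted sum of squared residuals with the Nelder-Mead simplex method, so no partial derivatives of the target function are formed. The exponential model and the weights are invented; this is not the algorithm of the paper.

        # Sketch: a weighted nonlinear least-squares adjustment solved with a
        # derivative-free method (Nelder-Mead). Model and weights are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        def model(x, t):
            a, b = x
            return a * np.exp(-b * t)

        t = np.linspace(0, 5, 20)
        obs = model([2.0, 0.7], t) + 0.05 * np.random.randn(t.size)
        weights = 1.0 / np.full(t.size, 0.05) ** 2     # precision of each observation

        def target(x):
            r = obs - model(x, t)
            return np.sum(weights * r * r)             # weighted sum of squares

        res = minimize(target, x0=[1.0, 1.0], method="Nelder-Mead")
        print(res.x)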

  14. Adjusting Felder-Silverman learning styles model for application in adaptive e-learning

    OpenAIRE

    Mihailović Đorđe; Despotović-Zrakić Marijana; Bogdanović Zorica; Barać Dušan; Vujin Vladimir

    2012-01-01

    This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to student characteristics and applies the concept of personalization in creating e-learning courses. The research has been conducted at Faculty of organi...

  15. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Carvajal-Gamez

    2012-09-01

    When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and the images can become visibly deformed. When using a steganographic algorithm, the numerical calculations performed by the computer cause errors and alterations in the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust these errors.

  16. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    B.E. Carvajal-Gámez

    2012-08-01

    When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and the images can become visibly deformed. When using a steganographic algorithm, the numerical calculations performed by the computer cause errors and alterations in the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust these errors.

  17. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  18. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared prediction errors is minimized....... For comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  19. Transformations among CE–CVM model parameters for multicomponent systems

    Indian Academy of Sciences (India)

    B Nageswara Sarma; Shrikant Lele

    2005-06-01

    In the development of thermodynamic databases for multicomponent systems using the cluster expansion–cluster variation methods, we need to have a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  20. SPOTting Model Parameters Using a Ready-Made Python Package.

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often more dependent on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  1. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    The choice of a specific parameter estimation method is often more dependent on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
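
    A hedged sketch of the SPOTPY setup-class pattern is given below, calibrating the two-dimensional Rosenbrock function mentioned in the case studies with the SCE-UA sampler. The class and method names follow the package's documented interface, but the exact signatures should be verified against the installed SPOTPY version.

        # Hedged sketch of a SPOTPY setup class for the Rosenbrock function.
        import spotpy

        class RosenbrockSetup:
            def __init__(self):
                self.params = [spotpy.parameter.Uniform('x', -5, 5),
                               spotpy.parameter.Uniform('y', -5, 5)]

            def parameters(self):
                return spotpy.parameter.generate(self.params)

            def simulation(self, vector):
                x, y = vector[0], vector[1]
                return [(1 - x) ** 2 + 100 * (y - x ** 2) ** 2]

            def evaluation(self):
                return [0.0]          # known minimum value of the Rosenbrock function

            def objectivefunction(self, simulation, evaluation):
                # SCE-UA searches for a minimum of the returned value
                return spotpy.objectivefunctions.rmse(evaluation, simulation)

        sampler = spotpy.algorithms.sceua(RosenbrockSetup(), dbname='rosen', dbformat='ram')
        sampler.sample(2000)
        results = sampler.getdata()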

  2. Numerical modeling of piezoelectric transducers using physical parameters.

    Science.gov (United States)

    Cappon, Hans; Keesman, Karel J

    2012-05-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer were obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.

  3. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
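
    The sketch below shows the usual trick of augmenting the state vector with the unknown parameter and running an extended Kalman filter over noisy measurements. The one-dimensional decay model, noise levels and initial guesses are assumptions for illustration; the heat-shock and gene-regulation models of the paper are not reproduced.

        # Sketch: joint state/parameter estimation with an extended Kalman filter
        # by augmenting the state x with the unknown rate k (model dx/dt = -k*x).
        import numpy as np

        def ekf_parameter_estimation(measurements, dt=0.1, q=1e-5, r=0.0025):
            z = np.array([1.0, 0.5])                 # augmented state [x, k], initial guess
            P = np.diag([0.1, 1.0])                  # state covariance
            Q = q * np.eye(2)                        # process noise
            history = []
            for y in measurements:
                # prediction: x <- x - k*x*dt, k assumed constant
                F = np.array([[1.0 - z[1] * dt, -z[0] * dt],
                              [0.0, 1.0]])           # Jacobian of the transition
                z = np.array([z[0] - z[1] * z[0] * dt, z[1]])
                P = F @ P @ F.T + Q
                # update with a measurement of x only
                H = np.array([[1.0, 0.0]])
                S = H @ P @ H.T + r
                K = P @ H.T / S
                z = z + (K * (y - z[0])).ravel()
                P = (np.eye(2) - K @ H) @ P
                history.append(z.copy())
            return np.array(history)

        # synthetic noisy measurements of x(t) with true k = 0.8
        t = np.arange(0, 10, 0.1)
        truth = np.exp(-0.8 * t)
        est = ekf_parameter_estimation(truth + 0.05 * np.random.randn(t.size))
        print("final estimate of k:", round(est[-1, 1], 3))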

  4. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    Science.gov (United States)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
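
    The elementary-effects computation itself is sketched below with plain random one-at-a-time trajectories; the Sampling for Uniformity strategy proposed in the abstract is not reproduced, and the toy model merely stands in for a watershed model.

        # Sketch of the method of elementary effects (Morris screening): each
        # parameter is perturbed one at a time along random trajectories and the
        # mean absolute effect mu* is used to rank importance.
        import numpy as np

        def elementary_effects(model, n_params, n_traj=20, delta=0.2, seed=0):
            rng = np.random.default_rng(seed)
            effects = [[] for _ in range(n_params)]
            for _ in range(n_traj):
                x = rng.uniform(0, 1 - delta, n_params)     # base point in the unit cube
                y0 = model(x)
                for i in rng.permutation(n_params):         # perturb one factor at a time
                    x_new = x.copy()
                    x_new[i] += delta
                    y1 = model(x_new)
                    effects[i].append((y1 - y0) / delta)
                    x, y0 = x_new, y1
            mu_star = [np.mean(np.abs(e)) for e in effects]
            sigma = [np.std(e) for e in effects]
            return mu_star, sigma

        # toy "watershed model": parameter 0 matters most, parameter 2 is inert
        toy = lambda p: 5 * p[0] + p[1] ** 2 + 0 * p[2]
        mu_star, sigma = elementary_effects(toy, n_params=3)
        print(np.round(mu_star, 2))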

  5. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    Science.gov (United States)

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have been increasingly carried out using this model. In China, research applying LUR models was done mostly at the model construction stage, and findings from related epidemiological studies were rarely reported. In this paper, the sources of heterogeneity and research progress of meta-analysis research on the associations between air pollution and adverse pregnancy outcomes were analyzed. The methods and characteristics of temporally-adjusted LUR models were introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model were systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  6. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational

  7. Minor hysteresis loops model based on exponential parameters scaling of the modified Jiles-Atherton model

    Energy Technology Data Exchange (ETDEWEB)

    Hamimid, M., E-mail: Hamimid_mourad@hotmail.com [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Mimoune, S.M., E-mail: s.m.mimoune@mselab.org [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Feliachi, M., E-mail: mouloud.feliachi@univ-nantes.fr [IREENA-IUT, CRTT, 37 Boulevard de l' Universite, BP 406, 44602 Saint Nazaire Cedex (France)

    2012-07-01

    In the present work, the minor hysteresis loops model based on parameter scaling of the modified Jiles-Atherton model is evaluated by using judicious expressions. These expressions give the minor hysteresis loop parameters as a function of those of the major hysteresis loop. They have an exponential form and are obtained by parameter identification using the stochastic optimization method 'simulated annealing'. The main parameters influencing the data fitting are three: the pinning parameter k, the mean field parameter α, and the parameter a, which characterizes the shape of the anhysteretic magnetization curve. To validate this model, calculated minor hysteresis loops are compared with measured ones and good agreements are obtained.

  8. Glacial isostatic adjustment at the Laurentide ice sheet margin: Models and observations in the Great Lakes region

    Science.gov (United States)

    Braun, Alexander; Kuo, Chung-Yen; Shum, C. K.; Wu, Patrick; van der Wal, Wouter; Fotopoulos, Georgia

    2008-10-01

    Glacial Isostatic Adjustment (GIA) modelling in North America relies on relative sea level information which is primarily obtained from areas far away from the uplift region. The lack of accurate geodetic observations in the Great Lakes region, which is located in the transition zone between uplift and subsidence due to the deglaciation of the Laurentide ice sheet, has prevented more detailed studies of this former margin of the ice sheet. Recently, observations of vertical crustal motion from improved GPS network solutions and combined tide gauge and satellite altimetry solutions have become available. This study compares these vertical motion observations with predictions obtained from 70 different GIA models. The ice sheet margin is distinct from the centre and far field of the uplift because the sensitivity of the GIA process towards Earth parameters such as mantle viscosity is very different. Specifically, the margin area is most sensitive to the uppermost mantle viscosity and allows for better constraints of this parameter. The 70 GIA models compared herein have different ice loading histories (ICE-3/4/5G) and Earth parameters including lateral heterogeneities. The root-mean-square differences between the 6 best models and the two sets of observations (tide gauge/altimetry and GPS) are 0.66 and 1.57 mm/yr, respectively. Both sets of independent observations are highly correlated and show a very similar fit to the models, which indicates their consistent quality. Therefore, both data sets can be considered as a means for constraining and assessing the quality of GIA models in the Great Lakes region and the former margin of the Laurentide ice sheet.

  9. MODELING OF FUEL SPRAY CHARACTERISTICS AND DIESEL COMBUSTION CHAMBER PARAMETERS

    Directory of Open Access Journals (Sweden)

    G. M. Kukharonak

    2011-01-01

    A computer model for coordinating fuel spray characteristics with diesel combustion chamber parameters has been created in the paper. The model makes it possible to observe the development of fuel sprays in the diesel cylinder at any moment of injection, to calculate the characteristics of the fuel sprays with due account of the shape and dimensions of the combustion chamber, and to change fuel injection characteristics, supercharging parameters, and the shape and dimensions of the combustion chamber in a timely manner. Moreover, the computer model permits determination of the parameters of the holes in an injector nozzle that provide the required fuel spray characteristics at the stage of designing a diesel engine. Combustion chamber parameters for the 4ЧН11/12.5 diesel engine have been determined in the paper.

  10. Mathematically Modeling Parameters Influencing Surface Roughness in CNC Milling

    Directory of Open Access Journals (Sweden)

    Engin Nas

    2012-01-01

    In this study, AISI 1050 steel is subjected to a face milling process in a CNC milling machine, and parameters influencing the surface roughness, such as cutting speed, feed rate, cutting tip and depth of cut, are investigated experimentally. Four different experiments are conducted by creating different combinations of the parameters. In the experiments, cutting tools coated by the PVD method, as used for forging steel and spheroidal graphite cast iron, are employed. Surface roughness values obtained with the specified parameters and cutting tools are measured, and the correlation between the measured surface roughness values and the parameters is modeled mathematically by using a curve fitting algorithm. The mathematical models are evaluated according to their coefficients of determination (R2) and the most suitable one is suggested for theoretical work. The mathematical models proposed for each experiment are presented.

  11. Regionalization parameters of conceptual rainfall-runoff model

    Science.gov (United States)

    Osuch, M.

    2003-04-01

    The main goal of this study was to develop techniques for the a priori estimation of hydrological model parameters. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, with catchment sizes ranging from 1 000 to 100 000 km2. The model was calibrated for a number of gauged catchments with different catchment characteristics. The model parameters were related to different climatic and physical catchment characteristics (topography, land use, vegetation and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. The model performance using regional parameters was promising for most of the calibration and validation catchments.

  12. Improved Water Network Macroscopic Model Utilising Auto-Control Adjusting Valve by PLS

    Institute of Scientific and Technical Information of China (English)

    LI Xia; ZHAO Xinhua; WANG Xiaodong

    2005-01-01

    In order to overcome the low precision and weak applicability problems of the current municipal water network state simulation model, the water network structure is studied. Since the telemetry system has been applied increasingly in the water network, and in order to reflect the network operational condition more accurately, a new water network macroscopic model is developed by taking the auto-control adjusting valve opening state into consideration. Then for highly correlated or collinear independent variables in the model, the partial least squares (PLS) regression method provides a model solution which can distinguish between the system information and the noisy data. Finally, a hypothetical water network is introduced for validating the model. The simulation results show that the relative error is less than 5.2%, indicating that the model is efficient and feasible, and has better generalization performance.
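
    As a generic illustration of the PLS step, the sketch below fits scikit-learn's PLSRegression to synthetic, highly collinear predictors; the data and the number of components are assumptions and do not represent the water-network model of the paper.

        # Sketch: partial least squares regression on collinear predictors, using
        # scikit-learn's PLSRegression as a stand-in for the macroscopic model fit.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n = 200
        base = rng.normal(size=(n, 3))
        # build highly correlated "monitoring point" inputs from 3 latent signals
        X = np.hstack([base, base + 0.01 * rng.normal(size=(n, 3))])
        y = 2.0 * base[:, 0] - 1.5 * base[:, 2] + 0.1 * rng.normal(size=n)

        pls = PLSRegression(n_components=3)       # keep only the informative components
        pls.fit(X, y)
        print("R^2 on training data:", round(pls.score(X, y), 3))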

  13. A novel parameter estimation method for metal oxide surge arrester models

    Indian Academy of Sciences (India)

    Mehdi Nafar; Gevork B Gharehpetian; Taher Niknam

    2011-12-01

    Accurate modelling and exact determination of Metal Oxide (MO) surge arrester parameters are very important for arrester allocation, insulation coordination studies and system reliability calculations. In this paper, a new technique, which combines the Adaptive Particle Swarm Optimization (APSO) and Ant Colony Optimization (ACO) algorithms and links MATLAB and EMTP, is proposed to estimate the parameters of MO surge arrester models. The proposed algorithm is named Modified Adaptive Particle Swarm Optimization (MAPSO). In the proposed algorithm, to overcome the drawback of the PSO algorithm (convergence to local optima), the inertia weight is tuned by using fuzzy rules and the cognitive and social parameters are self-adaptively adjusted. Also, to improve the global search capability and prevent convergence to local minima, the ACO algorithm is combined with the proposed APSO algorithm. The transient models of the MO surge arrester have been simulated by using ATP-EMTP. The results of the simulations have been applied to the program, which is based on the MAPSO algorithm and can determine the fitness and parameters of different models. The validity and the accuracy of the estimated parameters of the surge arrester models are assessed by comparing the predicted residual voltage with experimental results.

  14. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters which are used to influence the improvement of various mechanical and other properties of the products. The paper sets out a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that it must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.

  15. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
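
    A minimal sketch of the final estimation step is given below: maximum-likelihood fitting of a two-parameter Weibull distribution to a sample of fatigue lives with scipy. The synthetic lives merely stand in for those produced by the physics-of-failure, Rainflow and Miner's-rule chain.

        # Sketch: maximum-likelihood estimation of Weibull shape and scale from
        # a sample of synthetic fatigue lives.
        from scipy import stats

        lives = stats.weibull_min.rvs(c=2.2, scale=5.0e4, size=200, random_state=3)

        shape, loc, scale = stats.weibull_min.fit(lives, floc=0)   # fix location at 0
        print("Weibull shape:", round(shape, 2), "scale:", round(scale, 1))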

  16. Efficient probabilistic model personalization integrating uncertainty on data and parameters: Application to eikonal-diffusion models in cardiac electrophysiology.

    Science.gov (United States)

    Konukoglu, Ender; Relan, Jatin; Cilingir, Ulas; Menze, Bjoern H; Chinchapatnam, Phani; Jadidi, Amir; Cochet, Hubert; Hocini, Mélèze; Delingette, Hervé; Jaïs, Pierre; Haïssaguerre, Michel; Ayache, Nicholas; Sermesant, Maxime

    2011-10-01

    Biophysical models are increasingly used for medical applications at the organ scale. However, model predictions are rarely associated with a confidence measure, although there are important sources of uncertainty in computational physiology methods, for instance the sparsity and noise of the clinical data used to adjust the model parameters (personalization), and the difficulty in accurately modeling soft tissue physiology. The recent theoretical progress in stochastic models makes their use computationally tractable, but there is still a challenge in estimating patient-specific parameters with such models. In this work we propose an efficient Bayesian inference method for model personalization using polynomial chaos and compressed sensing. This method makes Bayesian inference feasible in real 3D modeling problems. We demonstrate our method on cardiac electrophysiology. We first present validation results on synthetic data, then we apply the proposed method to clinical data. We demonstrate how this can help in quantifying the impact of the data characteristics on the personalization (and thus prediction) results. The described method can be beneficial for the clinical use of personalized models as it explicitly takes into account the uncertainties on the data and the model parameters while still enabling simulations that can be used to optimize treatment. Such uncertainty handling can be pivotal for the proper use of modeling as a clinical tool, because there is a crucial requirement to know the confidence one can have in personalized models.

  17. A Filled Function with Adjustable Parameters for Unconstrained Global Optimization

    Institute of Scientific and Technical Information of China (English)

    尚有林; 李晓燕

    2004-01-01

    A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We will discuss the properties of the proposed filled function. Conditions on this function and on the values of parameters are given so that the constructed function has the desired properties of traditional filled function.

  18. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  20. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  1. Construction of constant-Q viscoelastic model with three parameters

    Institute of Scientific and Technical Information of China (English)

    SUN Cheng-yu; YIN Xing-yao

    2007-01-01

    The popularly used viscoelastic models have some shortcomings in describing the relationship between the quality factor (Q) and frequency, which is not consistent with observational data. Based on the theory of viscoelasticity, a new approach to construct a constant-Q viscoelastic model in a given frequency band with three parameters is developed. The designed model describes the frequency-independence of the quality factor very well, and the effect of viscoelasticity on the seismic wave field can be studied relatively accurately in theory with this model. Furthermore, the number of required parameters in this model is smaller than that of other constant-Q models, which can simplify the solution of viscoelastic problems to some extent. Finally, the accuracy and application range have been analyzed through numerical tests. The effect of viscoelasticity on wave propagation is briefly illustrated through the change of frequency spectra and waveforms in several different viscoelastic models.

  2. Global-scale regionalization of hydrologic model parameters

    Science.gov (United States)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
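
    The donor-selection idea can be sketched as below: each target grid cell receives the calibrated parameter sets of its 10 most similar donor catchments, with similarity measured as Euclidean distance in standardised climate and physiographic descriptors. The descriptor names, array sizes and the averaging of parameter sets (the paper averages the donors' simulated streamflow instead) are simplifying assumptions.

        # Sketch of similarity-based parameter regionalisation with 10 donors.
        import numpy as np

        def select_donors(target_desc, donor_desc, n_donors=10):
            # z-score the descriptors so each one contributes equally
            mean, std = donor_desc.mean(axis=0), donor_desc.std(axis=0)
            d = np.linalg.norm((donor_desc - mean) / std - (target_desc - mean) / std, axis=1)
            return np.argsort(d)[:n_donors]

        rng = np.random.default_rng(7)
        donor_desc = rng.random((674, 4))        # e.g. aridity, mean slope, forest %, snow %
        donor_params = rng.random((674, 12))     # calibrated HBV parameter sets
        target_desc = rng.random(4)              # descriptors of one 0.5-degree grid cell

        idx = select_donors(target_desc, donor_desc)
        # averaging the parameter sets here is only a simplification of the
        # paper's approach, which averages the donors' simulated streamflow
        params_for_cell = donor_params[idx].mean(axis=0)
        print(idx[:5], np.round(params_for_cell[:3], 2))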

  3. Bayesian parameter estimation for nonlinear modelling of biological pathways

    Directory of Open Access Journals (Sweden)

    Ghasemi Omid

    2011-12-01

    Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC method. We applied this approach to the biological pathways involved in the left ventricle (LV response to myocardial infarction (MI and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
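
    A minimal sketch of the general idea follows, assuming a simple one-state model with a Hill production term rather than the authors' left-ventricle pathway model: the ODE is stepped with a fourth-order Runge-Kutta scheme, and the Hill parameters are sampled with a random-walk Metropolis MCMC against synthetic data.

```python
# Sketch: discretise an ODE containing a Hill term with a Runge-Kutta scheme and
# sample the Hill parameters (K, n) with random-walk Metropolis MCMC.
# The one-state model, prior, input signal and data below are assumptions.
import numpy as np

rng = np.random.default_rng(1)
dt, t = 0.1, np.arange(0.0, 10.0, 0.1)
u = 1.0 + 0.5 * np.sin(t)                    # assumed known input signal

def rhs(x, ui, K, n):
    return ui**n / (K**n + ui**n) - 0.3 * x  # Hill production minus linear decay

def simulate(K, n):
    x, out = 0.0, []
    for ui in u:                             # classic fourth-order Runge-Kutta step
        k1 = rhs(x, ui, K, n)
        k2 = rhs(x + 0.5 * dt * k1, ui, K, n)
        k3 = rhs(x + 0.5 * dt * k2, ui, K, n)
        k4 = rhs(x + dt * k3, ui, K, n)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out.append(x)
    return np.array(out)

data = simulate(K=1.2, n=2.0) + rng.normal(0, 0.02, t.size)   # synthetic observations

def log_post(theta, sigma=0.02):
    K, n = theta
    if not (0 < K < 10 and 0 < n < 6):       # flat prior on a bounded box
        return -np.inf
    return -0.5 * np.sum(((data - simulate(K, n)) / sigma) ** 2)

theta = np.array([2.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(3000):                        # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print(np.mean(samples[1000:], axis=0))       # posterior mean of (K, n)
```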

  4. Mirror symmetry for two-parameter models, 1

    CERN Document Server

    Candelas, Philip; Font, A; Katz, S; Morrison, Douglas Robert Ogston; Candelas, Philip; Ossa, Xenia de la; Font, Anamaria; Katz, Sheldon; Morrison, David R.

    1994-01-01

    We study, by means of mirror symmetry, the quantum geometry of the K\\"ahler-class parameters of a number of Calabi-Yau manifolds that have $b_{11}=2$. Our main interest lies in the structure of the moduli space and in the loci corresponding to singular models. This structure is considerably richer when there are two parameters than in the various one-parameter models that have been studied hitherto. We describe the intrinsic structure of the point in the (compactification of the) moduli space that corresponds to the large complex structure or classical limit. The instanton expansions are of interest owing to the fact that some of the instantons belong to families with continuous parameters. We compute the Yukawa couplings and their expansions in terms of instantons of genus zero. By making use of recent results of Bershadsky et al. we compute also the instanton numbers for instantons of genus one. For particular values of the parameters the models become birational to certain models with one parameter. The co...

  5. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...

  6. Muscle parameters for musculoskeletal modelling of the human neck

    NARCIS (Netherlands)

    Borst, J.; Forbes, P.A.; Happee, R.; Veeger, H.E.J.

    2011-01-01

    Background: To study normal or pathological neuromuscular control, a musculoskeletal model of the neck has great potential but a complete and consistent anatomical dataset which comprises the muscle geometry parameters to construct such a model is not yet available. Methods: A dissection experiment

  7. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    Science.gov (United States)

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  8. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    1992-01-01

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of geomet

  9. Precise correction to parameter ρ in the littlest Higgs model

    Institute of Scientific and Technical Information of China (English)

    Farshid Tabbak; F.Farnoudi

    2008-01-01

    In this paper, tree-level violation of the weak isospin parameter ρ in the framework of the littlest Higgs model is studied. The potentially large deviation from the standard model prediction for ρ is calculated in terms of the littlest Higgs model parameters. The maximum value of ρ for f = 1 TeV, c = 0.05, c' = 0.05, and v' = 1.5 GeV is ρ = 1.2973, which is a large enhancement compared with the SM.

  10. Comparative Analysis of Visco-elastic Models with Variable Parameters

    Directory of Open Access Journals (Sweden)

    Silviu Nastac

    2010-01-01

    Full Text Available The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. Changes in the elastic and viscous parameters can be produced by natural degradation over time or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, in particular the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases provide the basis for numerical model optimization with respect to mathematical complexity versus result reliability.

  11. Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction

    Science.gov (United States)

    Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad

    2010-05-01

    Hydrological models can help us predict streamflows and the associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed, but the main reason is the limitation of hydrological measurement techniques and the cost of data collection at a fine scale. Generally, we are not able to measure everything we would like to know about a given hydrological system; this is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation gives the best predictions because it can model dry and wet conditions over a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the Hydrologic Modeling System (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one way to improve a model's utility is to reduce the number of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, a data-limited watershed located in the north-west of Fars Province in Iran. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters

  12. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  13. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    Science.gov (United States)

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  14. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively little computing time and few iterations.
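
    The integration-based estimation idea (the model equations are integrated inside the residual function while an optimizer adjusts the kinetic parameters) can be illustrated as below. This is a generic Python sketch with an assumed first-order decay model and synthetic data, not the PARES/MATLAB implementation itself.

```python
# Generic sketch of integration-based parameter estimation: the ODE is solved
# inside the residual function and a least-squares solver adjusts the kinetic
# parameter. The first-order decay model and the data are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 5.0, 20)
k_true = 0.8
rng = np.random.default_rng(2)
c_obs = np.exp(-k_true * t_obs) + rng.normal(0, 0.01, t_obs.size)  # synthetic data

def residuals(params):
    (k,) = params
    sol = solve_ivp(lambda t, c: -k * c, (0.0, t_obs[-1]), [1.0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0] - c_obs

fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
print("estimated k:", fit.x[0])   # close to the assumed k_true = 0.8
```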

  15. Condition Parameter Modeling for Anomaly Detection in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Yonglong Yan

    2014-05-01

    Full Text Available Data collected from the supervisory control and data acquisition (SCADA) system, used widely in wind farms to obtain operational and condition information about wind turbines (WTs), is of great significance for anomaly detection in wind turbines. The paper presents a novel model for wind turbine anomaly detection based mainly on SCADA data and a back-propagation neural network (BPNN) for automatic selection of the condition parameters. The SCADA data sets are determined through analysis of the cumulative probability distribution of wind speed and the relationship between output power and wind speed. The automatic BPNN-based parameter selection is used to reduce redundant parameters for anomaly detection in wind turbines. Through investigation of cases of WT faults, the validity of the automatic parameter-selection-based model for WT anomaly detection is verified.

  16. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Jieming Ma

    2013-01-01

    Full Text Available Since conventional methods are incapable of estimating the parameters of photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is inspired by the brood parasitism of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, reflected by a low root-mean-squared error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
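
    The core of such a method is the objective that the metaheuristic minimises: the RMSE between measured currents and the implicit single-diode model. The Python sketch below defines that objective; the "measured" I-V points are synthetic placeholders generated from an assumed parameter set, and any bounded global optimiser (Cuckoo Search in the paper, or, e.g., differential evolution) could then search the five-parameter space.

```python
# Sketch of the objective a metaheuristic such as Cuckoo Search would minimise:
# the RMSE between measured currents and the implicit single-diode model.
# The "measured" I-V points are synthetic, generated from an assumed parameter set.
import numpy as np
from scipy.optimize import fsolve

Vt = 0.0257  # thermal voltage at about 25 degC [V]

def single_diode_current(V, Iph, I0, Rs, Rsh, a):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(a*Vt)) - 1) - (V + I*Rs)/Rsh for I."""
    def f(I):
        return Iph - I0 * (np.exp((V + I * Rs) / (a * Vt)) - 1.0) - (V + I * Rs) / Rsh - I
    return fsolve(f, x0=Iph)[0]

def rmse(params, V_meas, I_meas):
    I_model = np.array([single_diode_current(v, *params) for v in V_meas])
    return np.sqrt(np.mean((I_model - I_meas) ** 2))

# assumed "true" cell parameters: Iph [A], I0 [A], Rs [ohm], Rsh [ohm], ideality a [-]
true = (0.76, 3.2e-7, 0.036, 53.8, 1.48)
V_meas = np.linspace(-0.2, 0.57, 20)
I_meas = np.array([single_diode_current(v, *true) for v in V_meas])

# a bounded global optimiser (Cuckoo Search in the paper, differential evolution,
# etc.) would search (Iph, I0, Rs, Rsh, a) to drive this value towards zero
print(rmse((0.75, 1.0e-7, 0.03, 50.0, 1.5), V_meas, I_meas))
```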

  17. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.

  18. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    Science.gov (United States)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper, transients are characterized in three dimensions by means of a conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  1. Estimation of the parameters of ETAS models by Simulated Annealing

    OpenAIRE

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...

  2. CADLIVE optimizer: web-based parameter estimation for dynamic models

    Directory of Open Access Journals (Sweden)

    Inoue Kentaro

    2012-08-01

    Full Text Available Abstract Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithms-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.

  3. Reference physiological parameters for pharmacodynamic modeling of liver cancer

    Energy Technology Data Exchange (ETDEWEB)

    Travis, C.C.; Arms, A.D.

    1988-01-01

    This document presents a compilation of measured values for physiological parameters used in pharmacodynamic modeling of liver cancer. The physiological parameters include body weight, liver weight, the liver weight/body weight ratio, and number of hepatocytes. Reference values for use in risk assessment are given for each of the physiological parameters based on analyses of valid measurements taken from the literature and other reliable sources. The proposed reference values for rodents include sex-specific measurements for B6C3F{sub 1} mice and Fischer 344/N, Sprague-Dawley, and Wistar rats. Reference values are also provided for humans. 102 refs., 65 tabs.

  4. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters, such as eigenfrequencies and damping ratios, are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
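
    The kind of simulation study described can be sketched as follows: a lightly damped single-degree-of-freedom free decay is sampled, an AR(2) model is fitted by least squares, the discrete-time poles are converted to an eigenfrequency and damping ratio, and the procedure is repeated over noise realisations to observe the statistical scatter. All numerical values below are illustrative assumptions, not those of the paper.

```python
# Sketch of such a simulation study: identify eigenfrequency and damping ratio of
# a lightly damped SDOF system from its noisy sampled free decay with an AR(2)
# model, repeating over noise realisations to see the statistical scatter.
# All numerical values are illustrative.
import numpy as np

fn, zeta, dt, N = 2.0, 0.01, 0.05, 400        # true frequency [Hz], damping, sampling
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta**2)
t = np.arange(N) * dt
clean = np.exp(-zeta * wn * t) * np.cos(wd * t)   # free decay of the SDOF system

rng = np.random.default_rng(3)
estimates = []
for _ in range(200):
    y = clean + rng.normal(0, 0.01, N)            # add measurement noise
    # least-squares AR(2) fit: y[k] = a1*y[k-1] + a2*y[k-2] + e[k]
    A = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]
    pole = np.roots([1.0, -a1, -a2]).astype(complex)[0]   # discrete-time pole
    lam = np.log(pole) / dt                       # continuous-time eigenvalue
    estimates.append((abs(lam) / (2.0 * np.pi), -lam.real / abs(lam)))

f_hat, z_hat = np.array(estimates).T
print("frequency: mean %.4f Hz, std %.4f" % (f_hat.mean(), f_hat.std()))
print("damping:   mean %.4f,    std %.4f" % (z_hat.mean(), z_hat.std()))
```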

  5. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...

  6. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.

  7. Modelling of Water Turbidity Parameters in a Water Treatment Plant

    Directory of Open Access Journals (Sweden)

    A. S. KOVO

    2005-01-01

    Full Text Available The high cost of chemical analysis of water has necessitated various research efforts into finding alternative methods of determining potable water quality. This paper is aimed at modelling the turbidity value as a water quality parameter. Mathematical models for turbidity removal were developed based on the relationships between water turbidity and other water criteria. Results showed that the turbidity of water is the cumulative effect of the individual parameters/factors affecting the system. A model equation for the evaluation and prediction of a clarifier's performance was developed: T = T0(-1.36729 + 0.037101∙10λpH + 0.048928∙t + 0.00741387∙alk). The developed model will aid the predictive assessment of water treatment plant performance. The limitations of the models are a result of the insufficient number of variables considered during the conceptualization.

  8. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
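
    A minimal sketch of simultaneous ("system") estimation for a bivariate Emax model is given below: the residuals of both responses are stacked, scaled by assumed noise levels, and all six parameters are estimated in one least-squares problem. The doses, parameter values and noise levels are illustrative, not the diabetes data used in the paper, and a full system estimator would additionally use the estimated error covariance between the two relations.

```python
# Minimal sketch of simultaneous ("system") estimation for a bivariate Emax model:
# residuals of both responses are stacked and all parameters are estimated in one
# least-squares problem. Doses, parameters and noise levels are assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
dose = np.repeat([0.0, 5.0, 10.0, 25.0, 50.0, 100.0], 8)

def emax_model(d, e0, emax, ed50):
    return e0 + emax * d / (ed50 + d)

# two responses generated from assumed true parameter values
y1 = emax_model(dose, 1.0, 8.0, 20.0) + rng.normal(0, 0.5, dose.size)
y2 = emax_model(dose, 0.2, 3.0, 15.0) + rng.normal(0, 0.3, dose.size)

def stacked_residuals(theta):
    e01, em1, ed1, e02, em2, ed2 = theta
    r1 = (y1 - emax_model(dose, e01, em1, ed1)) / 0.5   # scale by assumed noise SDs
    r2 = (y2 - emax_model(dose, e02, em2, ed2)) / 0.3
    return np.concatenate([r1, r2])

fit = least_squares(stacked_residuals, x0=[0.0, 5.0, 10.0, 0.0, 2.0, 10.0],
                    bounds=([-np.inf, 0.0, 1e-3] * 2, [np.inf] * 6))
print(fit.x)   # jointly estimated (E0, Emax, ED50) for the two responses
```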

  9. Shape parameter estimate for a glottal model without time position

    OpenAIRE

    Degottex, Gilles; Roebel, Axel; Rodet, Xavier

    2009-01-01

    From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lips radiation model. Since this filter is mainly b...

  10. Light-Front Spin-1 Model: Parameters Dependence

    CERN Document Server

    Mello, Clayton S; de Melo, J P B C; Frederico, T

    2015-01-01

    We study the structure of the $\\rho$-meson within a light-front model with constituent quark degrees of freedom. We calculate electroweak static observables: magnetic and quadrupole moments, decay constant and charge radius. The prescription used to compute the electroweak quantities is free of zero modes, which makes the calculation implicitly covariant. We compare the results of our model with other ones found in the literature. Our model parameters give a decay constant close to the experimental one.

  11. Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold

    CERN Document Server

    Pradhan, A; Singh, C B

    2006-01-01

    FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial, and (iii) sinusoidal form, respectively. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation are pointed out.

  12. Controlling fractional order chaotic systems based on Takagi-Sugeno fuzzy model and adaptive adjustment mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Yongai, E-mail: zhengyongai@163.co [Department of Computer, Yangzhou University, Yangzhou, 225009 (China); Nian Yibei [School of Energy and Power Engineering, Yangzhou University, Yangzhou, 225009 (China); Wang Dejin [Department of Computer, Yangzhou University, Yangzhou, 225009 (China)

    2010-12-01

    In this Letter, a kind of novel model, called the generalized Takagi-Sugeno (T-S) fuzzy model, is first developed by extending the conventional T-S fuzzy model. Then, a simple but efficient method to control fractional order chaotic systems is proposed using the generalized T-S fuzzy model and adaptive adjustment mechanism (AAM). Sufficient conditions are derived to guarantee chaos control from the stability criterion of linear fractional order systems. The proposed approach offers a systematic design procedure for stabilizing a large class of fractional order chaotic systems from the literature about chaos research. The effectiveness of the approach is tested on fractional order Roessler system and fractional order Lorenz system.

  13. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Science.gov (United States)

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  14. Optimization of parameters for maximization of plateletpheresis and lymphocytapheresis yields on the Haemonetics Model V50.

    Science.gov (United States)

    AuBuchon, J P; Carter, C S; Adde, M A; Meyer, D R; Klein, H G

    1986-01-01

    Automated apheresis techniques afford the opportunity of tailoring collection parameters for each donor's hematologic profile. This study investigated the effect of various settings of the volume offset parameter as utilized in the Haemonetics Model V50 instrumentation during platelet- and lymphocytapheresis to optimize product yield, purity, and collection efficiency. In both types of procedures, increased product yield could be obtained by using an increased volume offset for donors having lower hematocrits. This improvement was related to an increase in collection efficiency. Platelet products also contained fewer contaminating lymphocytes with this approach. Adjustment of the volume offset parameter can be utilized to make the most efficient use of donors and provide higher-quality products.

  15. Identification of slow molecular order parameters for Markov model construction

    CERN Document Server

    Perez-Hernandez, Guillermo; Giorgino, Toni; de Fabritiis, Gianni; Noé, Frank

    2013-01-01

    A goal in the kinetic characterization of a macromolecular system is the description of its slow relaxation processes, involving (i) identification of the structural changes involved in these processes, and (ii) estimation of the rates or timescales at which these slow processes occur. Most of the approaches to this task, including Markov models, Master-equation models, and kinetic network models, start by discretizing the high-dimensional state space and then characterize relaxation processes in terms of the eigenvectors and eigenvalues of a discrete transition matrix. The practical success of such an approach depends very much on the ability to finely discretize the slow order parameters. How can this task be achieved in a high-dimensional configuration space without relying on subjective guesses of the slow order parameters? In this paper, we use the variational principle of conformation dynamics to derive an optimal way of identifying the "slow subspace" of a large set of prior order parameters - either g...

  16. Solar Model Parameters and Direct Measurements of Solar Neutrino Fluxes

    CERN Document Server

    Bandyopadhyay, A; Goswami, S; Petcov, S T; Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati

    2006-01-01

    We explore a novel possibility of determining the solar model parameters, which serve as input in the calculations of the solar neutrino fluxes, by exploiting the data from direct measurements of the fluxes. More specifically, we use the rather precise value of the $^8B$ neutrino flux, $\\phi_B$ obtained from the global analysis of the solar neutrino and KamLAND data, to derive constraints on each of the solar model parameters on which $\\phi_B$ depends. We also use more precise values of $^7Be$ and $pp$ fluxes as can be obtained from future prospective data and discuss whether such measurements can help in reducing the uncertainties of one or more input parameters of the Standard Solar Model.

  17. IP-Sat: Impact-Parameter dependent Saturation model; revised

    CERN Document Server

    Rezaeian, Amir H; Van de Klundert, Merijn; Venugopalan, Raju

    2013-01-01

    In this talk, we present a global analysis of available small-x data on inclusive DIS and exclusive diffractive processes, including the latest data from the combined HERA analysis on reduced cross sections within the Impact-Parameter dependent Saturation (IP-Sat) Model. The impact-parameter dependence of the dipole amplitude is crucial in order to have a unified description of both inclusive and exclusive diffractive processes. With the parameters of the model fixed via a fit to the high-precision reduced cross-section data, we compare model predictions to data for the structure functions, the longitudinal structure function, the charm structure function, exclusive vector meson production and Deeply Virtual Compton Scattering (DVCS). Excellent agreement is obtained for the processes considered at small x in a wide range of Q^2.

  18. QCD-inspired determination of NJL model parameters

    CERN Document Server

    Springer, Paul; Rechenberger, Stefan; Rennecke, Fabian

    2016-01-01

    The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.

  19. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
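
    The underlying workflow that such a toolbox automates can be illustrated generically: draw parameter sets (here by Latin Hypercube Sampling), run the model for each set, and rank the sets with a likelihood such as the Nash-Sutcliffe efficiency. The Python code below uses a placeholder model and synthetic observations; it is not the SPOT API itself.

```python
# Generic illustration of what such a toolbox automates: draw parameter sets by
# Latin Hypercube Sampling, run a (placeholder) model for each set, and rank the
# sets with a Nash-Sutcliffe efficiency. This is not the SPOT API itself.
import numpy as np
from scipy.stats import qmc

def toy_model(params, forcing):
    a, b = params
    return a * forcing + b                      # stand-in for an environmental model

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
forcing = rng.random(200)
obs = toy_model((2.5, 0.3), forcing) + rng.normal(0, 0.05, 200)   # synthetic observations

lower, upper = np.array([0.0, -1.0]), np.array([5.0, 1.0])        # prior parameter ranges
sampler = qmc.LatinHypercube(d=2, seed=5)
params = qmc.scale(sampler.random(n=500), lower, upper)

scores = np.array([nse(toy_model(p, forcing), obs) for p in params])
best = params[np.argmax(scores)]
print("best parameter set:", best, "NSE:", scores.max())
```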

  20. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters for the models of generating units, including the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  1. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the process at the given microwave power level shows that the parameters ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
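
    The ±20% one-at-a-time screening used above can be sketched as follows; the parameter names and the scalar stand-in for the model response are assumptions, since the real study evaluates the full COMSOL drying simulation at each perturbation.

```python
# Sketch of the +/-20% one-at-a-time sensitivity screening, applied to a scalar
# placeholder response instead of the multiphase COMSOL drying simulation.
# Parameter names and baseline values are illustrative assumptions.
base = {
    "ambient_temperature": 298.0,           # K
    "effective_gas_diffusivity": 2.6e-5,    # m^2/s
    "evaporation_rate_constant": 1000.0,    # 1/s
    "heat_transfer_coefficient": 20.0,      # W/(m^2 K)
}

def model_output(p):
    # stand-in for the drying model's scalar response (e.g. final moisture content)
    return (1.0e-3 * p["ambient_temperature"] + 5.0e3 * p["effective_gas_diffusivity"]
            + 5.0 / p["evaporation_rate_constant"] + 1.0e-2 * p["heat_transfer_coefficient"])

y0 = model_output(base)
for name in base:
    for factor in (0.8, 1.2):                     # -20% and +20% perturbations
        perturbed = dict(base, **{name: base[name] * factor})
        change = 100.0 * (model_output(perturbed) - y0) / y0
        print(f"{name} x{factor:.1f}: {change:+.2f}% change in output")
```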

  2. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  3. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the process at the given microwave power level shows that the parameters ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  4. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan; Stieglitz, Marc

    2015-04-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
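
    The Kling-Gupta efficiency used to score the parameter-transfer schemes combines correlation, variability and bias terms into one score; a compact implementation, in its commonly used form and applied to placeholder streamflow series, is shown below.

```python
# Kling-Gupta efficiency in its commonly used form: correlation, variability
# ratio and bias ratio combined into one score. sim and obs are placeholders.
import numpy as np

def kling_gupta_efficiency(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]       # linear correlation component
    alpha = np.std(sim) / np.std(obs)     # variability ratio
    beta = np.mean(sim) / np.mean(obs)    # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 2.0, 365)                        # one year of daily "observed" flow
sim = 0.9 * obs + rng.normal(0, 0.5, 365)             # an imperfect "simulation"
print(kling_gupta_efficiency(sim, obs))               # 1.0 would be a perfect match
```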

  5. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    Science.gov (United States)

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
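
    The c-statistic discussed here is the probability that a randomly chosen patient who experienced the outcome was assigned a higher predicted risk than one who did not (i.e., the area under the ROC curve). A rank-based computation on illustrative predicted probabilities is sketched below.

```python
# The c-statistic is the probability that a randomly chosen patient with the
# outcome receives a higher predicted risk than one without it (the area under
# the ROC curve). Predicted risks and outcomes below are illustrative.
import numpy as np

def c_statistic(risk, outcome):
    risk, outcome = np.asarray(risk, float), np.asarray(outcome, int)
    pos, neg = risk[outcome == 1], risk[outcome == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs get half credit
    return (greater + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(7)
risk = rng.random(1000)                    # predicted probabilities from a model
outcome = rng.binomial(1, risk)            # outcomes generated consistently with the risks
print(c_statistic(risk, outcome))          # well above 0.5 for a discriminating model
```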

  6. Estimation of the parameters of ETAS models by Simulated Annealing

    Science.gov (United States)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
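
    A hedged sketch of the approach follows: scipy's dual_annealing (a generalised simulated annealing) maximises the log-likelihood of a simplified aftershock-rate model (background rate plus a single modified-Omori term, i.e., not the full ETAS likelihood) on a small synthetic catalog. The small catalog makes the estimates unstable, which echoes the paper's conclusion about catalog size.

```python
# Simplified sketch: maximum-likelihood fit of a background-plus-modified-Omori
# aftershock rate (a single mainshock at t = 0, no magnitudes, so NOT the full
# ETAS likelihood) using scipy's dual_annealing, a generalised simulated annealing.
# The catalog is synthetic and deliberately small.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(8)
T = 200.0
mu_t, K_t, c_t, p_t = 0.1, 1.5, 0.02, 1.2          # assumed true parameters

def rate(t, mu, K, c, p):
    return mu + K * (t + c) ** (-p)

# simulate event times on (0, T] by thinning a bounding homogeneous Poisson process
lam_max = rate(0.0, mu_t, K_t, c_t, p_t)
cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
times = cand[rng.random(cand.size) < rate(cand, mu_t, K_t, c_t, p_t) / lam_max]

def neg_log_lik(theta):
    mu, K, c, p = theta
    integral = mu * T + K * ((T + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)
    return integral - np.sum(np.log(rate(times, mu, K, c, p)))

bounds = [(1e-4, 1.0), (1e-3, 5.0), (1e-3, 1.0), (1.01, 2.5)]
result = dual_annealing(neg_log_lik, bounds, seed=8, maxiter=200)
print(times.size, "events; estimated (mu, K, c, p):", result.x)
# with only a few dozen events the estimates scatter widely, echoing the
# paper's point about small catalogs
```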

  7. J-A Hysteresis Model Parameters Estimation using GA

    Directory of Open Access Journals (Sweden)

    Bogomir Zidaric

    2005-01-01

    Full Text Available This paper presents Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution to a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm built into another genetic algorithm.

  8. A new estimate of the parameters in linear mixed models

    Institute of Scientific and Technical Information of China (English)

    王松桂; 尹素菊

    2002-01-01

    In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. A new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.

  9. Models wagging the dog: are circuits constructed with disparate parameters?

    Science.gov (United States)

    Nowotny, Thomas; Szücs, Attila; Levi, Rafael; Selverston, Allen I

    2007-08-01

    In a recent article, Prinz, Bucher, and Marder (2004) addressed the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary largely from one individual to another, using a database modeling approach. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word for this fundamental question has not yet been spoken.

  10. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pending issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land

  11. An adjusting model of sustainable development of Jiangsu province

    Institute of Scientific and Technical Information of China (English)

    凌亢; 陈传美

    2003-01-01

    This paper designed a natural evolution model to simulate the social and economic development of Jiangsu province. Then some parameters in the model were changed and several development patterns were obtained. All patterns were incorporated to design an adjusting model of sustainable development of Jiangsu province.

  12. Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory

    DEFF Research Database (Denmark)

    Li, Yan; Shuai, Zhikang; Xu, Qinming

    2016-01-01

    not only can avoid the active/reactive power coupling, but also may reduce the output voltage drop of the PCC voltage. The proposed adjustable complex virtual impedance loop is put into the conventional P/Q droop control to overcome the difficulty of obtaining the line impedance, which may change...... sometimes. The cloud model theory is applied to obtain the changing line impedance value online, which relies on the response of the reactive power to the changing line impedance. The verification of the proposed control strategy is done by simulation of a low voltage microgrid in Matlab.

  13. Risk Adjustment for Determining Surgical Site Infection in Colon Surgery: Are All Models Created Equal?

    Science.gov (United States)

    Muratore, Sydne; Statz, Catherine; Glover, J J; Kwaan, Mary; Beilman, Greg

    2016-04-01

    Colon surgical site infections (SSIs) are being utilized increasingly as a quality measure for hospital reimbursement and public reporting. The Centers for Medicare and Medicaid Services (CMS) now require reporting of colon SSI, which is entered through the U.S. Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). However, the CMS's model for determining expected SSIs uses different risk adjustment variables than does NHSN. We hypothesize that CMS's colon SSI model will predict lower expected infection rates than will NHSN. Colon SSI data were reported prospectively to NHSN from 2012-2014 for the six Fairview Hospitals (1,789 colon procedures). We compared expected quarterly SSIs and standardized infection ratios (SIRs) generated by CMS's risk-adjustment model (age and American Society of Anesthesiologist [ASA] classification) vs. NHSN's (age, ASA classification, procedure duration, endoscope [including laparoscope] use, medical school affiliation, hospital bed number, and incision class). The patients with more complex colon SSIs were more likely to be male (60% vs. 44%; p = 0.011), to have contaminated/dirty incisions (21% vs. 10%; p = 0.005), and to have longer operations (235 min vs. 156 min; p < 0.001) and were more likely to be at a medical school-affiliated hospital (53% vs. 40%; p = 0.032). For Fairview Hospitals combined, CMS calculated a lower number of expected quarterly SSIs than did the NHSN (4.58 vs. 5.09 SSIs/quarter; p = 0.002). This difference persisted in a university hospital (727 procedures; 2.08 vs. 2.33; p = 0.002) and a smaller, community-based hospital (565 procedures; 1.31 vs. 1.42; p = 0.002). There were two quarters in which CMS identified Fairview's SIR as an outlier for complex colon SSIs (p = 0.05 and 0.04), whereas NHSN did not (p = 0.06 and 0.06). The CMS's current risk-adjustment model using age and ASA classification predicts lower rates of expected colon

  14. Model Validation for Shipboard Power Cables Using Scattering Parameters

    Institute of Scientific and Technical Information of China (English)

    Lukas Graber; Diomar Infante; Michael Steurer; William W. Brey

    2011-01-01

    Careful analysis of transients in shipboard power systems is important to achieve long lifetimes of the components in future all-electric ships. In order to accomplish results with high accuracy, it is recommended to validate cable models as they have significant influence on the amplitude and frequency spectrum of voltage transients. The authors propose comparison of model and measurement using scattering parameters. They can be easily obtained from measurement and simulation and deliver broadband information about the accuracy of the model. The measurement can be performed using a vector network analyzer. The process to extract scattering parameters from simulation models is explained in detail. Three different simulation models of a 5 kV XLPE power cable have been validated. The chosen approach delivers an efficient tool to quickly estimate the quality of a model.
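
    As a rough illustration of the kind of model-side S-parameter extraction described here (not the authors' cable models), the sketch below builds the ABCD matrix of a uniform RLGC transmission line with placeholder per-unit-length constants and converts it to S11 and S21 against a 50-ohm network-analyzer reference:

    ```python
    import numpy as np

    # Hypothetical per-unit-length cable constants (illustrative, not the 5 kV XLPE cable)
    R, L, G, C = 0.05, 2.5e-7, 1e-10, 1.0e-10   # ohm/m, H/m, S/m, F/m
    length = 10.0                                # cable length in metres
    z_ref = 50.0                                 # network-analyzer reference impedance

    freqs = np.logspace(5, 8, 301)               # 100 kHz .. 100 MHz
    s11 = np.empty_like(freqs, dtype=complex)
    s21 = np.empty_like(freqs, dtype=complex)

    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        z = R + 1j * w * L                        # series impedance per metre
        y = G + 1j * w * C                        # shunt admittance per metre
        gamma = np.sqrt(z * y)                    # propagation constant
        z0 = np.sqrt(z / y)                       # characteristic impedance
        # ABCD matrix of a uniform transmission line, then ABCD -> S conversion
        A = D = np.cosh(gamma * length)
        B = z0 * np.sinh(gamma * length)
        Cp = np.sinh(gamma * length) / z0
        den = A + B / z_ref + Cp * z_ref + D
        s11[i] = (A + B / z_ref - Cp * z_ref - D) / den
        s21[i] = 2.0 / den

    print("|S21| at 10 MHz:", abs(s21[np.argmin(abs(freqs - 1e7))]))
    ```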

  15. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  16. Considerations for parameter optimization and sensitivity in climate models.

    Science.gov (United States)

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.
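
    A minimal sketch of the metamodel idea, assuming a single tunable parameter and a hypothetical scalar model output: a quadratic polynomial is fitted to a few expensive model runs and a quartic to the squared-error objective, whose minimum suggests the optimal parameter value:

    ```python
    import numpy as np

    # Hypothetical scalar diagnostic of a climate model as a function of one parameter
    def model_output(p):
        return 1.5 + 0.8 * p - 0.35 * p**2 + 0.02 * np.random.randn(*np.shape(p))

    obs = 1.9                                   # "observed" target value (illustrative)
    p_train = np.linspace(0.0, 2.0, 7)          # a handful of expensive model runs
    y_train = model_output(p_train)

    # Quadratic metamodel of the model field, quartic metamodel of the squared-error objective
    quad = np.polyfit(p_train, y_train, 2)
    obj = np.polyfit(p_train, (y_train - obs) ** 2, 4)

    p_dense = np.linspace(0.0, 2.0, 401)
    p_opt = p_dense[np.argmin(np.polyval(obj, p_dense))]
    print("metamodel optimum parameter:", round(p_opt, 3))
    ```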

  17. Towards individualized dose constraints: Adjusting the QUANTEC radiation pneumonitis model for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.

    2014-01-01

    Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method to adjust dose...... the dose-only QUANTEC model and the model including risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated. Results. The reference dose-response relationship...
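
    As a hedged illustration of the general idea (placeholder coefficients, not the published QUANTEC fit or the authors' exact adjustment), the sketch below evaluates a logistic dose-response in mean lung dose and shifts its intercept by the log odds ratios of whatever clinical risk factors are present:

    ```python
    import math

    # Illustrative logistic dose-response for radiation pneumonitis (RP) versus mean lung
    # dose (MLD). The coefficients are placeholders of roughly the right magnitude only.
    B0, B1 = -3.87, 0.126        # assumed intercept and slope per Gy

    def rp_risk(mld_gy, odds_ratios=()):
        """NTCP for RP; clinical risk factors enter as odds ratios shifting the intercept."""
        lp = B0 + B1 * mld_gy + sum(math.log(orr) for orr in odds_ratios)
        return 1.0 / (1.0 + math.exp(-lp))

    # Dose-only estimate vs. an estimate adjusted for two hypothetical risk factors
    print("dose only        :", round(rp_risk(20.0), 3))
    print("with risk factors:", round(rp_risk(20.0, odds_ratios=(2.0, 1.5)), 3))
    ```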

  18. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters...... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
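
    A simplified sketch of the approach, using a pure AR(2) fit (rather than a full ARMA model) on a simulated noisy decay of a lightly damped single-degree-of-freedom system; the discrete poles are converted back to an eigenfrequency and damping ratio:

    ```python
    import numpy as np

    # Simulate the noisy free decay of a lightly damped SDOF system
    fn, zeta, fs, n = 2.0, 0.01, 50.0, 1500          # natural freq [Hz], damping, sample rate, samples
    t = np.arange(n) / fs
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
    x = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t) + 0.005 * np.random.randn(n)

    # Least-squares fit of an AR(2) model: x[k] = a1*x[k-1] + a2*x[k-2] + e[k]
    X = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]

    # Discrete pole -> continuous eigenvalue -> modal parameters
    pole = np.roots([1.0, -a1, -a2])[0]              # one of the complex-conjugate poles
    lam = np.log(pole) * fs
    fn_hat = abs(lam) / (2 * np.pi)
    zeta_hat = -lam.real / abs(lam)
    print(f"identified fn = {fn_hat:.3f} Hz, zeta = {zeta_hat:.4f}")
    ```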

  19. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region of the Taihu Lake basin, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, and the four parameters of the soil's nitrogen and phosphorus ratio (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also showed that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
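
    A minimal sketch of a perturbation-based sensitivity index, with a purely hypothetical stand-in for an AnnAGNPS run: each parameter is perturbed by a fixed fraction and the relative change in the output is divided by the relative change in the parameter:

    ```python
    def perturbation_sensitivity(run_model, params, name, base_output, delta=0.10):
        """Relative sensitivity: (relative change in output) / (relative change in parameter)."""
        perturbed = dict(params)
        perturbed[name] = params[name] * (1.0 + delta)
        new_output = run_model(perturbed)
        return ((new_output - base_output) / base_output) / delta

    # Hypothetical stand-in for an AnnAGNPS run returning annual runoff (mm)
    def toy_model(p):
        return 0.6 * p["CN"] ** 1.2 * p["K"] ** 0.3

    params = {"CN": 75.0, "K": 0.28}
    base = toy_model(params)
    for name in params:
        s = perturbation_sensitivity(toy_model, params, name, base)
        print(f"sensitivity of runoff to {name}: {s:+.2f}")
    ```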

  20. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Full Text Available Classical calibration or space resection is a fundamental task in photogrammetry. The lack of sufficient knowledge of interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known both in the image and the object reference systems. The non-linearity of the model, and the problems in point location in digital images and in identifying the maximum number of GPS-measured control points, are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points: the positions of two Along-Track Stereo imagery sensors and an independent object point. Assuming this condition it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, with and without using control points. Moreover, after extracting the EO parameters, the accuracy of satellite data products is compared using a single control point and using no control points.

  1. Iterative integral parameter identification of a respiratory mechanics model

    Directory of Open Access Journals (Sweden)

    Schranz Christoph

    2012-07-01

    Full Text Available Abstract Background Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual’s model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS patients. Results The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. Conclusion These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
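
    For illustration only, the sketch below applies a (non-iterative) integral least-squares identification to a first-order single-compartment model P = E·V + R·Q + P0 on synthetic data; the paper's method extends this idea iteratively to a second-order model:

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Synthetic airway data from a single-compartment model P = E*V + R*Q + P0
    fs = 100.0
    t = np.arange(0, 3.0, 1 / fs)
    Q = 0.5 * np.sin(2 * np.pi * t / 3.0)                     # flow [L/s]
    V = cumulative_trapezoid(Q, t, initial=0.0)               # volume [L]
    E_true, R_true, P0_true = 25.0, 8.0, 5.0
    P = E_true * V + R_true * Q + P0_true + 0.2 * np.random.randn(t.size)

    # Integrate the equation of motion once: int(P) = E*int(V) + R*V + P0*t
    IP = cumulative_trapezoid(P, t, initial=0.0)
    IV = cumulative_trapezoid(V, t, initial=0.0)
    A = np.column_stack([IV, V, t])
    E_hat, R_hat, P0_hat = np.linalg.lstsq(A, IP, rcond=None)[0]
    print(f"E = {E_hat:.1f} cmH2O/L, R = {R_hat:.1f} cmH2O.s/L, P0 = {P0_hat:.1f} cmH2O")
    ```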

  2. A pressure consistent bridge correction of Kovalenko-Hirata closure in Ornstein-Zernike theory for Lennard-Jones fluids by apparently adjusting sigma parameter

    Directory of Open Access Journals (Sweden)

    Yuki Ebato

    2016-05-01

    Full Text Available Ornstein-Zernike (OZ integral equation theory is known to overestimate the excess internal energy, Uex, pressure through the virial route, Pv, and excess chemical potential, μex, for one-component Lennard-Jones (LJ fluids under hypernetted chain (HNC and Kovalenko-Hirata (KH approximatons. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, it was shown in our previous paper that the method to apparently adjust σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Molec. Liquids. 217, 75 (2016]. In our previous paper, we evaluated the actual variation in the σ parameter by using a fitting procedure to molecular dynamics (MD results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes a condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without a fitting procedure to MD results, and possesses characteristics of keeping a form of HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function. We discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.

  3. A pressure consistent bridge correction of Kovalenko-Hirata closure in Ornstein-Zernike theory for Lennard-Jones fluids by apparently adjusting sigma parameter

    Science.gov (United States)

    Ebato, Yuki; Miyata, Tatsuhiko

    2016-05-01

    Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, Uex, pressure through the virial route, Pv, and excess chemical potential, μex, for one-component Lennard-Jones (LJ) fluids under hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, it was shown in our previous paper that the method to apparently adjust the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Molec. Liquids. 217, 75 (2016)]. In our previous paper, we evaluated the actual variation in the σ parameter by using a fitting procedure to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes a condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without a fitting procedure to MD results, and possesses characteristics of keeping a form of HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function. We discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.

  4. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    Science.gov (United States)

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
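
    A hedged sketch of the fixed-effects part of such a fit, using synthetic body-weight data; a mixed-effects version would additionally place bird-specific random effects on, for example, the asymptotic weight:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Gompertz growth curve: W(t) = Wm * exp(-exp(-b*(t - t_infl)))
    def gompertz(t, wm, b, t_infl):
        return wm * np.exp(-np.exp(-b * (t - t_infl)))

    # Synthetic body-weight data for one bird (days, grams) -- illustrative only
    age = np.arange(0, 57, 7, dtype=float)
    true = gompertz(age, 4200.0, 0.06, 28.0)
    bw = true * (1.0 + 0.03 * np.random.randn(age.size))      # multiplicative (heterogeneous) error

    params, cov = curve_fit(gompertz, age, bw, p0=[4000.0, 0.05, 25.0])
    print("Wm, b, t_infl =", np.round(params, 3))
    print("asymptotic SEs =", np.round(np.sqrt(np.diag(cov)), 3))
    ```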

  5. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    Science.gov (United States)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance the robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the 2015 summer season. The study aims at an optimized prediction of biophysical parameters and the identification of the best-explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the development of the plant stock. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
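
    A minimal sketch of the random-forest regression step on synthetic stand-in data (simulated band reflectances and LAI, not the study's field measurements), including the variable-importance output used to identify the best-explaining bands:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for Landsat 8 surface reflectance (bands 2-7) and in-situ LAI
    n = 300
    bands = rng.uniform(0.02, 0.45, size=(n, 6))
    # columns 0..5 correspond to bands 2..7, so red = column 2, NIR = column 3
    ndvi = (bands[:, 3] - bands[:, 2]) / (bands[:, 3] + bands[:, 2])
    lai = np.clip(6.0 * ndvi + 0.3 * rng.standard_normal(n), 0.0, None)

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    print("CV R^2:", cross_val_score(rf, bands, lai, cv=5, scoring="r2").mean().round(2))

    rf.fit(bands, lai)
    print("band importances:", np.round(rf.feature_importances_, 2))
    ```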

  6. Bayesian hierarchical models combining different study types and adjusting for covariate imbalances: a simulation study to assess model performance.

    Directory of Open Access Journals (Sweden)

    C Elizabeth McCarron

    Full Text Available BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were unbiased and closest to the true value compared to the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence leading to unbiased results compared to unadjusted analyses.
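
    The frequentist core of the issue can be illustrated with a small simulation (a simplified stand-in for the Bayesian hierarchical analysis): pooling a randomised and a covariate-imbalanced non-randomised study biases the naive treatment-effect estimate, while adjusting for the patient-level covariate removes most of the bias:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    true_effect = 1.0

    def make_study(n, randomised):
        x = rng.standard_normal(n)                       # prognostic patient-level covariate
        if randomised:
            treat = rng.integers(0, 2, n)                # balanced by design
        else:
            treat = (x + 0.5 * rng.standard_normal(n) > 0).astype(int)   # imbalanced arms
        y = true_effect * treat + 1.5 * x + rng.standard_normal(n)
        return x, treat, y

    x, treat, y = map(np.concatenate, zip(make_study(500, True), make_study(500, False)))

    naive = sm.OLS(y, sm.add_constant(treat)).fit().params[1]
    adjusted = sm.OLS(y, sm.add_constant(np.column_stack([treat, x]))).fit().params[1]
    print(f"naive pooled effect: {naive:.2f}  covariate-adjusted: {adjusted:.2f}  (truth {true_effect})")
    ```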

  7. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for the in-flight manipulator control system in almost zero gravity and a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed where several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for obtaining the initial conditions and a general nonlinear optimization method (MCS, multilevel coordinate search) algorithm to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.

  8. Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes

    CERN Document Server

    Yang, Xin-She; Luan, Tao; Koziel, Slawomir

    2014-01-01

    Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.
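
    A hedged sketch of the estimation step, with a hypothetical two-parameter startup model and made-up data; scipy's differential evolution is used here simply as a readily available stand-in for the firefly algorithm:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical startup model of a pulsating heat pipe: wall temperature rise vs. time,
    # with two unknown lumped parameters (k1: effective gain, tau: startup time constant)
    def wall_temperature(t, k1, tau):
        return k1 * (1.0 - np.exp(-t / tau))

    # Very limited "experimental" data (illustrative values, not from the paper)
    t_obs = np.array([0.0, 30.0, 60.0, 120.0, 240.0])
    T_obs = np.array([0.0, 8.1, 13.9, 20.5, 24.8])

    def objective(p):
        k1, tau = p
        return np.sum((wall_temperature(t_obs, k1, tau) - T_obs) ** 2)

    # A global stochastic optimizer plays the role of the firefly algorithm here
    res = differential_evolution(objective, bounds=[(1.0, 50.0), (1.0, 500.0)], seed=1)
    print("k1, tau =", np.round(res.x, 2), " SSE =", round(res.fun, 3))
    ```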

  9. The influences of model parameters on the characteristics of memristors

    Institute of Scientific and Technical Information of China (English)

    Zhou Jing; Huang Da

    2012-01-01

    As the fourth passive circuit component, a memristor is a nonlinear resistor that can "remember" the amount of charge passing through it. The characteristic of "remembering" the charge and non-volatility makes memristors great potential candidates in many fields. Nowadays, only a few groups have the ability to fabricate memristors, and most researchers study them by theoretic analysis and simulation. In this paper, we first analyse the theoretical base and characteristics of memristors, then use a simulation program with integrated circuit emphasis as our tool to simulate the theoretical model of memristors and change the parameters in the model to see the influence of each parameter on the characteristics. Our work supplies researchers engaged in memristor-based circuits with advice on how to choose the proper parameters.
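
    A minimal sketch of the kind of parameter study described, using the common HP-style linear ion-drift memristor model with illustrative parameter values; changing Ron, Roff, D or the mobility alters the pinched hysteresis loop traced by the simulated current-voltage pairs:

    ```python
    import numpy as np

    # HP-style linear ion-drift memristor model (illustrative parameter values)
    Ron, Roff, D, mu_v = 100.0, 16e3, 10e-9, 1e-14     # ohm, ohm, m, m^2 s^-1 V^-1
    dt, n = 1e-4, 20_000
    t = np.arange(n) * dt
    v = 1.2 * np.sin(2 * np.pi * 5.0 * t)              # sinusoidal drive

    w = 0.1 * D                                        # initial doped-region width
    i = np.zeros(n)
    for k in range(n):
        M = Ron * w / D + Roff * (1.0 - w / D)         # memristance
        i[k] = v[k] / M
        w += mu_v * Ron / D * i[k] * dt                # state equation dw/dt = mu_v*Ron/D * i
        w = min(max(w, 0.0), D)                        # keep the state inside the device

    # The (v, i) pairs trace a pinched hysteresis loop; varying Ron, Roff, D or mu_v
    # changes the lobe area, which is how parameter influence can be explored.
    print("peak current [mA]:", round(1e3 * i.max(), 3))
    ```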

  10. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
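
    For orientation, a short Euler-Maruyama simulation of the CKLS dynamics dr = (a + b·r) dt + σ·r^γ dW with fixed, illustrative parameter values; in the paper these four parameters are themselves treated as stochastic and re-estimated from a moving window of observed rates:

    ```python
    import numpy as np

    # Euler-Maruyama simulation of the CKLS short-rate model: dr = (a + b*r) dt + sigma * r**gamma dW
    def simulate_ckls(r0, a, b, sigma, gamma, dt, n, rng):
        r = np.empty(n + 1)
        r[0] = r0
        for k in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            drift = (a + b * r[k]) * dt
            diffusion = sigma * max(r[k], 0.0) ** gamma * dw
            r[k + 1] = max(r[k] + drift + diffusion, 1e-8)   # keep the rate positive
        return r

    rng = np.random.default_rng(42)
    # Illustrative parameter values only (long-run mean -a/b = 5%)
    path = simulate_ckls(r0=0.05, a=0.02, b=-0.4, sigma=0.15, gamma=1.5, dt=1 / 252, n=2520, rng=rng)
    print("simulated 10-year mean rate:", round(path.mean(), 4))
    ```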

  11. Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Dan; LI Chengrong; WANG Zhongdong

    2013-01-01

    The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods including the one regressing X on Y and the other one regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one that suits transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10 000 sets of lifetime data, each of which had a sample size of 40 to 1 000 and a censoring rate of 90%, were obtained by Monte-Carlo simulations for each scenario. Scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it could provide the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest length of the 90% confidence band. The maximum likelihood method is therefore recommended to be used over the other methods in transformer Weibull lifetime modelling.
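
    A minimal sketch of the recommended maximum likelihood approach under heavy right-censoring, on simulated lifetimes (illustrative values only): censored units contribute the log-survival function to the likelihood:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(7)

    # Simulated transformer lifetimes (years) with heavy right-censoring
    shape_true, scale_true = 3.0, 40.0
    n, censor_time = 200, 20.0
    life = weibull_min.rvs(shape_true, scale=scale_true, size=n, random_state=rng)
    t = np.minimum(life, censor_time)
    failed = life <= censor_time                       # most units are still running

    def neg_loglik(p):
        k, lam = p
        if k <= 0 or lam <= 0:
            return np.inf
        z = (t / lam) ** k
        ll_fail = np.log(k / lam) + (k - 1) * np.log(t[failed] / lam) - z[failed]
        ll_cens = -z[~failed]                          # censored units contribute ln S(t)
        return -(ll_fail.sum() + ll_cens.sum())

    res = minimize(neg_loglik, x0=[1.0, t.mean()], method="Nelder-Mead")
    print("MLE shape, scale:", np.round(res.x, 2))
    ```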

  12. Calculation of Thermodynamic Parameters for Freundlich and Temkin Isotherm Models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zengqiang; ZHANG Yiping; et al.

    1999-01-01

    Derivation of the Freundlich and Temkin isotherm models from the kinetic adsorption/desorption equations was carried out to calculate their thermodynamic equilibrium constants. The calculation formulae of three thermodynamic parameters, the standard molar Gibbs free energy change, the standard molar enthalpy change and the standard molar entropy change, of isothermal adsorption processes for the Freundlich and Temkin isotherm models were deduced according to the relationship between the thermodynamic equilibrium constants and the temperature.
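
    A small numerical sketch of the resulting relations, with hypothetical equilibrium constants: ΔG° = −RT ln K, ΔH° from the van't Hoff slope of ln K versus 1/T, and ΔS° from the intercept:

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    # Hypothetical dimensionless thermodynamic equilibrium constants at three temperatures
    T = np.array([288.15, 298.15, 308.15])
    K = np.array([4.2, 3.1, 2.4])

    dG = -R * T * np.log(K)                            # standard molar Gibbs free energy change
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    dH = -R * slope                                    # van't Hoff: ln K = -dH/(R T) + dS/R
    dS = R * intercept

    print("dG (kJ/mol):", np.round(dG / 1e3, 2))
    print("dH (kJ/mol):", round(dH / 1e3, 2), " dS (J/mol/K):", round(dS, 1))
    ```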

  13. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  14. Parabolic problems with parameters arising in an evolution model for phytoremediation

    Science.gov (United States)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One of the new, innovative methods of removing metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  15. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    As an alternative to gravity footings or pile foundations, offshore wind turbines in shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally efficient...

  16. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi

  17. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  18. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when their response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.

  19. Modeling and simulation of HTS cables for scattering parameter analysis

    Science.gov (United States)

    Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June

    2016-11-01

    Most modeling and simulation of high temperature superconducting (HTS) cables is inadequate for high frequency analysis, since the simulations focus on the fundamental frequency of the power grid and therefore do not reflect transient characteristics. However, high frequency analysis is an essential process for studying HTS cable transients for protection and diagnosis of the HTS cables. Thus, this paper proposes a new approach for modeling and simulation of HTS cables to derive the scattering parameters (S-parameters), an effective high frequency analysis, for transient wave propagation characteristics in the high frequency range. A parameter sweeping method is used to validate the simulation results against the measured data given by a network analyzer (NA). This paper also presents the effects of the cable-to-NA connector in order to minimize the error between the simulated and the measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be accurately derived over a wide range of frequencies. The results of the proposed modeling and simulation can yield the characteristics of the HTS cables and will contribute to the analysis of HTS cables.

  20. Dynamic Air-Route Adjustments - Model,Algorithm,and Sensitivity Analysis

    Institute of Scientific and Technical Information of China (English)

    GENG Rui; CHENG Peng; CUI Deguang

    2009-01-01

    Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with the minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for these coefficients. The computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.

  1. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model) for the Greater London

    Directory of Open Access Journals (Sweden)

    Ali P. Yunus

    2016-04-01

    Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas at risk of seawater flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK climate projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub-type flood inundation modelling for London boroughs.
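
    A minimal sketch of a tidally adjusted bathtub test on a synthetic DEM (not the London data): cells below the water level are flagged as flooded only if they are hydraulically connected to the sea:

    ```python
    import numpy as np
    from scipy.ndimage import label

    # Synthetic DEM (elevations in metres) with an open-sea strip along one edge
    rng = np.random.default_rng(3)
    dem = rng.uniform(0.0, 6.0, size=(200, 200))
    dem[:, :10] = -1.0

    def inundated_area(dem, water_level, cell_area=1.0):
        """Cells below the (tidally adjusted) water level that are connected to the sea."""
        wet = dem <= water_level
        regions, _ = label(wet)                         # 4-connected flood regions
        sea_ids = np.unique(regions[:, :10])
        sea_ids = sea_ids[sea_ids > 0]
        connected = np.isin(regions, sea_ids)
        return connected[:, 10:].sum() * cell_area      # exclude the open-sea strip itself

    for level in (0.5, 1.0, 1.9):                       # SLR scenarios with tidal surge folded in
        print(f"water level {level:.1f} m -> inundated cells: {inundated_area(dem, level)}")
    ```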

  2. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    Directory of Open Access Journals (Sweden)

    Samuel J Clark

    Full Text Available A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used six DHSs and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
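
    A hedged sketch of the two-step Heckman logic on simulated survey data with a continuous outcome (the HIV application uses a selection model for a binary outcome, but the mechanics are analogous): a probit consent model supplies the inverse Mills ratio, which then enters the outcome regression for respondents:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Simulated survey: z drives both consent to testing (selection) and the outcome,
    # and an unobserved factor u links non-response to the outcome (selection bias).
    n = 5000
    z = rng.standard_normal(n)
    u = rng.standard_normal(n)
    consent = (0.5 + 0.8 * z + u > 0).astype(int)            # selection equation
    y = 1.0 + 0.5 * z + 0.6 * u + rng.standard_normal(n)     # outcome (e.g. a latent risk index)

    # Step 1: probit for consent, then the inverse Mills ratio
    Xs = sm.add_constant(z)
    probit = sm.Probit(consent, Xs).fit(disp=0)
    xb = Xs @ probit.params
    imr = norm.pdf(xb) / norm.cdf(xb)

    # Step 2: outcome regression on respondents only, including the inverse Mills ratio
    obs = consent == 1
    Xo = sm.add_constant(np.column_stack([z[obs], imr[obs]]))
    ols = sm.OLS(y[obs], Xo).fit()
    print(ols.params.round(3))     # a non-zero IMR coefficient signals selection on unobservables
    ```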

  3. Evaluation of some infiltration models and hydraulic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-07-01

    The evaluation of infiltration characteristics and some parameters of infiltration models, such as sorptivity and the final steady infiltration rate in soils, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate the final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of the sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to observed infiltration data. Some parameters of the models and the coefficient of determination (goodness of fit) were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using the root mean squared error (RMSE), Horton's model gave the best prediction of the final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated from the selected models. The estimated final infiltration rate was not equal to laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
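
    A minimal sketch of fitting one of the compared models, Horton's f(t) = fc + (f0 − fc)·exp(−kt), to illustrative double-ring readings (made-up values, not the Taleghan data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Horton infiltration model: f(t) = fc + (f0 - fc) * exp(-k t)
    def horton(t, f0, fc, k):
        return fc + (f0 - fc) * np.exp(-k * t)

    # Illustrative double-ring infiltration readings (time in min, rate in mm/h)
    t_obs = np.array([1, 3, 5, 10, 20, 40, 60, 90, 120], dtype=float)
    f_obs = np.array([92, 74, 61, 43, 28, 18, 15, 13, 12], dtype=float)

    (f0, fc, k), _ = curve_fit(horton, t_obs, f_obs, p0=[90.0, 10.0, 0.05])
    rmse = np.sqrt(np.mean((horton(t_obs, f0, fc, k) - f_obs) ** 2))
    print(f"f0 = {f0:.1f} mm/h, fc = {fc:.1f} mm/h, k = {k:.3f} 1/min, RMSE = {rmse:.2f} mm/h")
    ```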

  4. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  5. Estimating model parameters in nonautonomous chaotic systems using synchronization

    Science.gov (United States)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-05-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver system. Examples are presented by employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique can be favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.

  6. Estimating model parameters in nonautonomous chaotic systems using synchronization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaoli [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)]. E-mail: yangxl205@mail.nwpu.edu.cn; Xu, Wei [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Sun, Zhongkui [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

    2007-05-07

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver system. Examples are presented by employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique can be favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.

  7. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  8. Multiscale Parameter Regionalization for consistent global water resources modelling

    Science.gov (United States)

    Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.

    2017-04-01

    Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer function parameters across scales and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allow validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other
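
    A toy sketch of the MPR idea: a global transfer function maps fine-grid soil attributes to a model parameter, which is then upscaled by block averaging, so the calibrated quantities are the transfer-function coefficients rather than the gridded parameter itself (values and functional form are purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Fine-grid (e.g. 1 km) basin attributes: sand and clay fractions
    sand = rng.uniform(0.1, 0.8, size=(64, 64))
    clay = rng.uniform(0.05, 0.5, size=(64, 64))

    # Global transfer-function coefficients; in MPR these, not the gridded parameter,
    # are what gets calibrated at the coarse resolution
    g0, g1, g2 = 0.15, 0.45, -0.20

    def regionalize(sand, clay, block=8):
        """Apply the transfer function on the fine grid, then upscale by block averaging."""
        k_fine = g0 + g1 * sand + g2 * clay               # e.g. a porosity-like parameter
        h, w = k_fine.shape
        k_coarse = k_fine.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        return k_fine, k_coarse

    k_fine, k_coarse = regionalize(sand, clay)
    print("fine-grid mean  :", round(k_fine.mean(), 3))
    print("coarse-grid mean:", round(k_coarse.mean(), 3))   # consistent across scales by construction
    ```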

  9. Model and parameter uncertainty in IDF relationships under climate change

    Science.gov (United States)

    Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.

    2015-05-01

    Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them to the future for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCMs using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule and the parameters are transformed to obtain return levels for a specified return period. A Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty.
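
    A compact sketch of the parameter-uncertainty part: random-walk Metropolis-Hastings sampling of GEV parameters for synthetic annual maxima, with the posterior propagated to the 100-year return level:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(10)

    # Annual maximum daily rainfall (mm) -- synthetic stand-in for the observed record
    data = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=40, random_state=rng)

    def log_post(theta):
        c, loc, scale = theta
        if scale <= 0:
            return -np.inf
        return genextreme.logpdf(data, c, loc=loc, scale=scale).sum()   # flat priors

    # Random-walk Metropolis-Hastings over the GEV parameters
    theta = np.array([-0.1, 60.0, 15.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.05, 1.0, 0.5])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    samples = np.array(samples[5000:])                                   # drop burn-in

    # Posterior of the 100-year return level propagates the parameter uncertainty
    rl100 = genextreme.ppf(1 - 1 / 100, samples[:, 0], loc=samples[:, 1], scale=samples[:, 2])
    print("100-yr return level: median %.1f mm, 90%% CI (%.1f, %.1f) mm"
          % (np.median(rl100), *np.percentile(rl100, [5, 95])))
    ```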

  10. Multi-Period Model of Portfolio Investment and Adjustment Based on Hybrid Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    RONG Ximin; LU Meiping; DENG Lin

    2009-01-01

    This paper proposes a multi-period portfolio investment model with class constraints, transaction costs, and indivisible securities. When an investor joins the securities market for the first time, he should decide on portfolio investment based on the practical conditions of the securities market. In addition, investors should adjust the portfolio according to market changes, changing or not changing the category of risky securities. The Markowitz mean-variance approach is applied to the multi-period portfolio selection problem. Because the sub-models are mixed integer programs whose objective functions are not unimodal and whose feasible sets have a particular structure, traditional optimization methods usually fail to find a globally optimal solution. This paper therefore employs a hybrid genetic algorithm to solve the problem. Investment policies that accord with the financial market and are easy for investors to operate are put forward, with an illustration of their application.

  11. [Analysis of the nutritional parameters and adjustment of the requirements of the initial parenteral nutrition in post surgical critically ill patients].

    Science.gov (United States)

    Herrero Domínguez-Berrueta, M Carmen; Martín de Rosales Cabrera, Ana María; Pérez Encinas, Montserrat

    2014-02-01

    To analyze nutritional parameters in critical post-surgical patients under stressful conditions, their evolution, and to assess the degree of adjustment of initial parenteral nutrition (PN) to the requirements set for in the recently published recommendations. Observational, retrospective study including post-surgical critically ill patients admitted to the post-surgical reanimation unit (RU) in whom PN was prescribed, in 2011. Demographical, anthropometric, diagnosis, nutritional parameters, mortality, total duration of hospitalization and duration of hospitalization at the RU, and complications were gathered. The type of PN prescribed was compared, with individualization of the requirements by Kg of body weight, according to the latest recommendations published on nutrition of critically ill patients (ASPEN, ESPEN, SENPE): 18-30 kcal/kg, 0.8-1.5 g/kg/proteins, 4 mg/kg/min/glucose and 2-3 mg/kg/min/glucose in patients with stress-related hyperglycemia, and 0.5-1 g/kg/day of lipids. The variables analyzed were caloric, protein, and glucose adjustments in the initial PN, recovering of albumin > 3 g/dL at day 10, and likely association with the number of complications, mortality and hospital stay. 60 patients were analyzed. 23.3% (14/60) presented hyponutrition at admission, with significant weight loss before the intervention. Albumin, a negative acute phase reactant, was significantly low at baseline, on average 1.9 g/dL (95%CI 1.83-2.12), which indicates a high level of metabolic stress in post-surgical patients. Prescribed PNs were adjusted to the recommendations for kcal, proteins and lipids in 68.3%, 71.7%, and 80.4%, respectively. 57.1% were adjusted for glucose, although the intake from fluid therapy was not taken into account. In patients with a BMI 3 g/dL at day 10, and the mortality, the duration of hospitalization at the RU, and the number of complications were significantly lower in these patients than in those not recuperating their albumin levels (p

  12. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    Science.gov (United States)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce the ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with a peak thickness of 4000 m and a surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  13. Reduced parameter model on trajectory tracking data with applications

    Institute of Scientific and Technical Information of China (English)

    王正明; 朱炬波

    1999-01-01

    The data fusion in tracking the same trajectory by multi-measurement units (MMU) is considered. Firstly, the reduced parameter models (RPM) of the trajectory parameter (TP), system error and random error are presented, and then the RPM on trajectory tracking data (TTD) is obtained; a weighting method for measuring elements (ME) is studied, and criteria for the selection of ME based on residuals and accuracy estimation are put forward. Based on the RPM, the problems of ME selection and self-calibration of TTD are thoroughly investigated. The method clearly improves data accuracy in trajectory tracking and simultaneously gives an accuracy evaluation of the trajectory tracking system.

  14. Parameter Estimation of the Extended Vasiček Model

    OpenAIRE

    Rujivan, Sanae

    2010-01-01

    In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
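
    For orientation, the sketch below performs maximum likelihood estimation for the ordinary constant-parameter Vasiček model, where the transition density is exactly Gaussian and no expansion is needed; it is not the approximate likelihood construction for the extended model described in the paper, and all numerical values are illustrative.

```python
# Maximum likelihood for a constant-parameter Vasicek model,
# dr = kappa*(theta - r) dt + sigma dW, using the exact Gaussian
# transition density. Illustrative only; the paper treats the
# extended (time-varying) model via a density expansion.
import numpy as np
from scipy.optimize import minimize

def vasicek_negloglik(params, r, dt):
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (r[:-1] - theta) * e
    var = sigma**2 * (1.0 - e**2) / (2.0 * kappa)
    resid = r[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a sample path, then recover the parameters.
rng = np.random.default_rng(0)
kappa, theta, sigma, dt, n = 1.5, 0.05, 0.02, 1.0 / 252, 5000
r = np.empty(n); r[0] = 0.03
for i in range(1, n):
    e = np.exp(-kappa * dt)
    r[i] = theta + (r[i - 1] - theta) * e + \
        sigma * np.sqrt((1 - e**2) / (2 * kappa)) * rng.standard_normal()

fit = minimize(vasicek_negloglik, x0=[1.0, 0.04, 0.01], args=(r, dt),
               method="Nelder-Mead")
print(fit.x)  # should be close to (1.5, 0.05, 0.02)
```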

  15. Prediction of mortality rates using a model with stochastic parameters

    Science.gov (United States)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  16. Probabilistic Constraint Programming for Parameters Optimisation of Generative Models

    CERN Document Server

    Zanin, Massimiliano; Sousa, Pedro A C; Cruz, Jorge

    2015-01-01

    Complex networks theory has commonly been used for modelling and understanding the interactions taking place between the elements composing complex systems. More recently, the use of generative models has gained momentum, as they allow identifying which forces and mechanisms are responsible for the appearance of given structural properties. In spite of this interest, several problems remain open, one of the most important being the design of robust mechanisms for finding the optimal parameters of a generative model, given a set of real networks. In this contribution, we address this problem by means of Probabilistic Constraint Programming. Using the reconstruction of networks representing brain dynamics as an example, we show how this approach is superior to other solutions, in that it allows a better characterisation of the parameter space, while requiring a significantly lower computational cost.

  17. Mark-recapture models with parameters constant in time.

    Science.gov (United States)

    Jolly, G M

    1982-06-01

    The Jolly-Seber method, which allows for both death and immigration, is easy to apply but often requires a larger number of parameters to be estimated than would otherwise be necessary. If (i) survival rate, phi, or (ii) probability of capture, p, or (iii) both phi and p can be assumed constant over the experimental period, models with a reduced number of parameters are desirable. In the present paper, maximum likelihood (ML) solutions for these three situations are derived from the general ML equations of Jolly [1979, in Sampling Biological Populations, R. M. Cormack, G. P. Patil and D. S. Robson (eds), 277-282]. A test is proposed for heterogeneity arising from a breakdown of assumptions in the general Jolly-Seber model. Tests for constancy of phi and p are provided. An example is given, in which these models are fitted to data from a local butterfly population.

  18. Procedures for adjusting regional regression models of urban-runoff quality using local data

    Science.gov (United States)

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
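
    A minimal sketch of the simplest procedure listed above, single-factor regression of observed local storm loads against the regional-model prediction P (MAP-1F-P): the local calibration data determine an intercept and slope that are then used to adjust predictions at unmonitored sites. The data here are synthetic and the variable names are hypothetical; the other MAPs add local explanatory variables or weighting.

```python
# Sketch of a single-factor model-adjustment procedure (MAP-1F-P):
# regress observed local storm-runoff loads on the regional-model
# prediction P and use the fitted line to adjust new predictions.
# Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(5, 50, size=40)                  # regional-model predictions
observed = 0.7 * P + 2.0 + rng.normal(0, 3, 40)  # local calibration data

A = np.column_stack([np.ones_like(P), P])        # ordinary least squares
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
intercept, slope = coef

def adjusted_prediction(p_regional):
    """Locally adjusted load estimate for an unmonitored site."""
    return intercept + slope * p_regional

print(intercept, slope, adjusted_prediction(30.0))
```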

  19. Enhancing debris flow modeling parameters integrating Bayesian networks

    Science.gov (United States)

    Graf, C.; Stoffel, M.; Grêt-Regamey, A.

    2009-04-01

    Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters has to be defined before running it. Normally, the data base describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a certain torrent is limited. Only a few places in the world fortunately offer valuable data sets describing the event history of debris-flow channels, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use the flow path and deposition zone information of reconstructed debris-flow events derived from dendrogeomorphological analysis covering more than 400 years to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk

  20. Last deglacial relative sea level variations in Antarctica derived from glacial isostatic adjustment modelling

    Directory of Open Access Journals (Sweden)

    Jun'ichi Okuno

    2013-11-01

    Full Text Available We present relative sea level (RSL) curves in Antarctica derived from glacial isostatic adjustment (GIA) predictions based on the melting scenarios of the Antarctic ice sheet since the Last Glacial Maximum (LGM) given in previous works. Simultaneously, Holocene-age RSL observations obtained at raised beaches along the coast of Antarctica are shown to be in agreement with the GIA predictions. The differences from previously published ice-loading models regarding the spatial distribution and total mass change of the melted ice are significant. These models were also derived from GIA modelling; the variations can be attributed to the lack of geological and geographical evidence regarding the history of crustal movement due to ice sheet evolution. Next, we summarise the previously published ice load models and demonstrate the RSL curves based on combinations of different ice and earth models. The RSL curves calculated by the GIA models indicate that the dependence on both the ice and earth models is significantly large at several sites where RSL observations were obtained. In particular, GIA predictions based on a thin lithospheric thickness show spatial distributions that depend on the melted ice thickness at each site. These characteristics result from the short-wavelength deformation of the Earth. However, our predictions strongly suggest that it is possible to find an average ice model despite the use of different models of lithospheric thickness. From sea level and crustal movement observations, we can deduce the geometry of the post-LGM ice sheets in detail and remove the GIA contribution from the crustal deformation and gravity change observed by space geodetic techniques, such as GPS and GRACE, for the estimation of the Antarctic ice mass change associated with recent global warming.

  1. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is presented, as well as its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  2. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-08-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of the paper is the measurement of economic growth and an extension of the adjusted R.M. Solow model.

  3. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users “the most suitable model” as the software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.

  4. Parameter Identifiability of Ship Manoeuvring Modeling Using System Identification

    Directory of Open Access Journals (Sweden)

    Weilin Luo

    2016-01-01

    Full Text Available To improve the feasibility of system identification in the prediction of ship manoeuvrability, several measures are presented to deal with parameter identifiability in the parametric modeling of ship manoeuvring motion based on system identification. The drift of nonlinear hydrodynamic coefficients is explained from the point of view of regression analysis. To diminish the multicollinearity in a complicated manoeuvring model, the difference method and the additional-signal method are employed to reconstruct the samples. Moreover, the structure of the manoeuvring model is simplified based on correlation analysis. Manoeuvring simulation is performed to demonstrate the validity of the proposed measures.

  5. Robust linear parameter varying induction motor control with polytopic models

    Directory of Open Access Journals (Sweden)

    Dalila Khamari

    2013-01-01

    Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To do so, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. This new approach takes into account rotor resistance and mechanical speed as varying parameters in the synthesis of an LPV feedback controller for the inner loop. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above-cited regulator. The induction motor is described by a polytopic model because of the affine dependence on speed and rotor resistance, whose values can be estimated online during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, with robust stability and high performance achieved over the entire operating range of the induction motor.

  6. Minimum information modelling of structural systems with uncertain parameters

    Science.gov (United States)

    Hyland, D. C.

    1983-01-01

    Work is reviewed wherein the design of active structural control is formulated as the mean-square optimal control of a linear mechanical system with stochastic parameters. In practice, a complete probabilistic description of model parameters can never be provided by empirical determinations, and a suitable design approach must accept very limited a priori data on parameter statistics. In consequence, the mean-square optimization problem is formulated using a complete probability assignment which is made to be consistent with available data but maximally unconstrained otherwise through use of a maximum entropy principle. The ramifications of this approach for both robustness and large dimensionality are illustrated by consideration of the full-state feedback regulation problem.

  7. Parameter estimation in a spatial unit root autoregressive model

    CERN Document Server

    Baran, Sándor

    2011-01-01

    Spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is, when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1),\ (1,-1,1),\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
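
    In the stable (non-unit-root) case, the least squares estimator of $(\alpha,\beta,\gamma)$ is obtained by regressing each observation on its three lattice neighbours; the sketch below simulates such a field and recovers the parameters. The grid size and parameter values are arbitrary illustrations, and the unit-root asymptotics discussed in the paper are not reproduced.

```python
# Least squares estimation of (alpha, beta, gamma) in the spatial AR model
#   X[k,l] = alpha*X[k-1,l] + beta*X[k,l-1] + gamma*X[k-1,l-1] + eps[k,l].
# Illustrative sketch with a stable (non-unit-root) parameter choice.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, gamma = 0.4, 0.3, -0.2
n = 200
X = np.zeros((n + 1, n + 1))
for k in range(1, n + 1):
    for l in range(1, n + 1):
        X[k, l] = (alpha * X[k - 1, l] + beta * X[k, l - 1]
                   + gamma * X[k - 1, l - 1] + rng.standard_normal())

# Stack the three lagged neighbours as regressors.
Y = X[1:, 1:].ravel()
Z = np.column_stack([X[:-1, 1:].ravel(),    # X[k-1, l]
                     X[1:, :-1].ravel(),    # X[k, l-1]
                     X[:-1, :-1].ravel()])  # X[k-1, l-1]
est, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(est)  # close to (0.4, 0.3, -0.2)
```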

  8. Two models to compute an adjusted Green Vegetation Fraction taking into account the spatial variability of soil NDVI

    Science.gov (United States)

    Montandon, L. M.; Small, E.

    2008-12-01

    The green vegetation fraction (Fg) is an important climate and hydrologic model parameter. The commonly used Fg model is a simple linear mixing of two NDVI end-members: bare soil NDVI (NDVIo) and full vegetation NDVI (NDVI∞). NDVI∞ is generally set as a percentile of the historical maximum NDVI for each land cover. This approach works well for areas where Fg reaches full cover (100%). Because many biomes never reach Fg=0 (fully bare soil), however, NDVIo is often set to a single invariant value for all land cover types. In general, it is selected among the lowest NDVI values observed over bare or desert areas, yielding NDVIo close to zero. There are two issues with this approach: the large-scale variability of soil NDVI is ignored, and observations on a wide range of soils show that soil NDVI is often larger. Here we introduce and test two new approaches to compute Fg that take into account the spatial variability of soil NDVI. The first approach uses a global soil NDVI database and time series of MODIS NDVI data over the conterminous United States to constrain possible soil NDVI values over each pixel. Fg is computed using the subset of the soils database that respects the linear mixing condition NDVIo≤NDVIh, where NDVIh is the pixel's historical minimum. The second approach uses an empirical soil NDVI model that combines information on soil organic matter content and texture to infer soil NDVI. The U.S. General Soil Map (STATSGO2) database is used as input for spatial soil properties. Using in situ measurements of soil NDVI from sites that span a range of land cover types, we test both models and compare their performance to the standard Fg model. We show that our models adjust the temporal Fg estimates by 40-90% depending on the land cover type and amplitude of the seasonal NDVI signal. Using MODIS NDVI and soil maps over the conterminous U.S., we also study the spatial distribution of Fg adjustments in February and June 2008. We show that the standard Fg method
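
    For reference, the standard two-end-member mixing model that the authors modify is Fg = (NDVI - NDVIo)/(NDVI∞ - NDVIo); the sketch below simply evaluates it with a spatially variable soil NDVI in place of a single invariant value. The array values are made up, and the constraint NDVIo ≤ NDVIh used in the paper is not enforced here.

```python
# Two end-member green vegetation fraction, allowing a spatially
# variable soil NDVI (NDVIo) instead of a single invariant value.
# Illustrative sketch; the paper constrains NDVIo from soil databases.
import numpy as np

def green_vegetation_fraction(ndvi, ndvi_soil, ndvi_full):
    """Fg = (NDVI - NDVIo) / (NDVIinf - NDVIo), clipped to [0, 1]."""
    fg = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fg, 0.0, 1.0)

ndvi = np.array([[0.15, 0.45], [0.60, 0.80]])        # observed pixels
ndvi_soil = np.array([[0.12, 0.20], [0.10, 0.15]])   # per-pixel soil NDVI
print(green_vegetation_fraction(ndvi, ndvi_soil, ndvi_full=0.85))
```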

  9. Adjusting Felder-Silverman learning styles model for application in adaptive e-learning

    Directory of Open Access Journals (Sweden)

    Mihailović Đorđe

    2012-01-01

    Full Text Available This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to student characteristics and applies the concept of personalization in creating e-learning courses. The research was conducted at the Faculty of Organizational Sciences, University of Belgrade, during the winter semester of 2009/10, on a sample of 318 students. The students from the experimental group were divided into three clusters, based on data about their styles identified using an adjusted Felder-Silverman questionnaire. Data about learning styles collected during the research were used to determine typical groups of students and then to classify students into these groups. The classification was performed using data mining techniques. Adaptation of the e-learning courses was implemented according to the results of the data analysis. Evaluation showed a statistically significant difference in the results of students who attended the course adapted using the described method, compared with the results of students who attended a course that was not adapted.

  10. A Comparative Study of CAPM and Seven Factors Risk Adjusted Return Model

    Directory of Open Access Journals (Sweden)

    Madiha Riaz Bhatti

    2014-12-01

    Full Text Available This study compares and contrasts the predictive powers of two asset pricing models, the CAPM and a seven-factor risk-adjusted return model, to explain the cross section of stock returns in the financial sector listed at the Karachi Stock Exchange (KSE). To test the models, daily returns from January 2013 to February 2014 were taken and the excess returns of portfolios were regressed on the explanatory variables. The results indicate that the models are valid and applicable in the financial market of Pakistan during the period under study, as the intercepts are not significantly different from zero. It is consequently established that all the explanatory variables explain the stock returns in the financial sector of the KSE. In addition, the results show that adding more explanatory variables to the single-factor CAPM results in reasonably high values of R². These results provide substantial support to fund managers, investors and financial analysts in making investment decisions.
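
    A minimal sketch of the intercept test implied above: regress a portfolio's excess returns on the pricing factors and check that the estimated intercept (alpha) is not significantly different from zero. The factor series and returns below are synthetic, three factors stand in for either model (the seven-factor case just adds columns), and the computation is plain OLS rather than the authors' exact procedure.

```python
# Intercept (alpha) test via OLS: excess returns regressed on factors.
# Synthetic data; with seven factors the design matrix gains columns.
import numpy as np

rng = np.random.default_rng(5)
n_days, n_factors = 250, 3
F = rng.normal(0, 0.01, size=(n_days, n_factors))          # factor returns
beta_true = np.array([1.1, 0.4, -0.2])
r_excess = F @ beta_true + rng.normal(0, 0.005, n_days)    # zero true alpha

X = np.column_stack([np.ones(n_days), F])
coef, res_ss, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
dof = n_days - X.shape[1]
sigma2 = res_ss[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_alpha = coef[0] / se[0]
print(f"alpha = {coef[0]:.5f}, t-statistic = {t_alpha:.2f}")
```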

  11. Adjustment and Development of Health User’s Mental Model Completeness Scale in Search Engines

    Directory of Open Access Journals (Sweden)

    Maryam Nakhoda

    2016-10-01

    Full Text Available Introduction: Users’ performance and their interaction with information retrieval systems can be observed in the development of their mental models. Users, especially health information users, use mental models to facilitate their interactions with these systems, and incomplete or incorrect models can cause problems for them. The aim of this study was the adjustment and development of a scale of health users’ mental model completeness in search engines. Method: This quantitative study uses the Delphi method. Among various scales for users’ mental model completeness, Li’s scale was selected and some items were added to it based on previous valid literature. Delphi panel members were selected using a purposeful sampling method, consisting of 20 and 18 participants in the first and second rounds, respectively. Kendall’s Coefficient of Concordance in SPSS version 16 was used as the basis for agreement (95% confidence). Results: The Kendall coefficient of concordance (W) was calculated to be 0.261 (P-value<0.001) for the first and 0.336 (P-value<0.001) for the second round; the study was therefore found to be statistically significant with 95% confidence. Since the increase in the coefficient over two consecutive rounds was very small (equal to 0.075), surveying of the panel members was stopped based on the second Schmidt criterion and the Delphi method was stopped after the second round. Finally, the dimensions of Li’s scale (existence and nature, search characteristics, and levels of interaction) were confirmed again, but “indexing of pages or websites” was eliminated and “difference between results of different search engines”, “possibility of access to similar or related webpages”, and “possibility of search for special formats and multimedia” were added to Li’s scale. Conclusion: In this study, the scale for mental model completeness of health users was adjusted and developed; it can help the designers of information retrieval systems in systematic
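
    For reference, Kendall's coefficient of concordance W, used above as the agreement criterion among the Delphi panellists in each round, can be computed as in the sketch below; the ratings matrix is a made-up example (rows are panellists, columns are scale items) and no tie correction is applied.

```python
# Kendall's coefficient of concordance (W) for a panel of raters.
# Minimal version without tie correction; synthetic ratings matrix.
import numpy as np
from scipy.stats import rankdata

ratings = np.array([[5, 3, 4, 2, 1],
                    [4, 3, 5, 1, 2],
                    [5, 2, 4, 3, 1]])          # m panellists x n items
ranks = np.apply_along_axis(rankdata, 1, ratings)
m, n = ranks.shape
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))
print(round(W, 3))   # W close to 1 indicates strong concordance
```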

  12. A data-driven model of present-day glacial isostatic adjustment in North America

    Science.gov (United States)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. Along parts of the North American coastline, improved predictions of the long-term (kyr

  13. An amino acid substitution-selection model adjusts residue fitness to improve phylogenetic estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2014-04-01

    Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices from those of a data set of interest are typically adjusted for by matrix multiplication that converts the empirical rate matrix to an exchangeability matrix which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Г model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection

  14. Recursive modular modelling methodology for lumped-parameter dynamic systems.

    Science.gov (United States)

    Orsino, Renato Maia Matarazzo

    2017-08-01

    This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploring the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed, in order to conciliate the use of the already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, use of orthogonal complements and use of generalized inverses. The novel methodology is also applied for the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to the one of a Kalman filter for estimating the state of a static process, given a sequence of noiseless measurements representing the constraints that must be satisfied by the system.

  15. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    Science.gov (United States)

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  16. Parameter adjustment in cognitive radio spectrum allocation based on game theory

    Institute of Scientific and Technical Information of China (English)

    张北伟; 胡琨元; 朱云龙

    2012-01-01

    With regard to dynamic spectrum allocation in wireless cognitive networks, a dynamic Bertrand game algorithm for the channel pricing of licensed users is proposed using the Bertrand equilibrium. The relationship between the stability of the Nash equilibrium and the speed adjustment parameter is then analyzed. A step response function from control theory is used to study the oscillation-free pricing game process, and a three-value method is proposed for determining the step response parameters. The simulation results show that the proposed algorithm obtains a stable channel price when the value of the speed parameter is less than 0.04. In addition, the feasibility of using a step function to analyze the oscillation-free game process is verified; this makes it convenient for licensed users to set prices quickly in real time, bringing greater economic benefit.
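
    The sketch below illustrates a generic dynamic Bertrand price adjustment of the kind described above, in which each licensed user updates its channel price in proportion to a speed parameter times its marginal profit; a small speed parameter converges to a stable price while a large one diverges. The demand and cost constants, the profit function and the stability behaviour shown are hypothetical stand-ins, not the model or the 0.04 threshold reported in the paper.

```python
# Generic dynamic Bertrand price adjustment with a speed parameter v:
#   p_i <- p_i + v * p_i * d(profit_i)/d(p_i)
# A small v converges to the equilibrium price; a large v diverges.
# Demand/cost constants are hypothetical, not from the paper.
import numpy as np

a, b, d, c = 10.0, 1.0, 0.5, 1.0   # linear demand and marginal cost

def marginal_profit(p_own, p_other):
    # profit_i = (p_i - c) * (a - b*p_i + d*p_j)
    return a - 2.0 * b * p_own + d * p_other + b * c

def simulate(v, steps=200, tol=1e-8):
    p = np.array([2.0, 3.0])
    for _ in range(steps):
        grads = np.array([marginal_profit(p[0], p[1]),
                          marginal_profit(p[1], p[0])])
        p_new = p + v * p * grads
        if not np.all(np.isfinite(p_new)) or np.max(np.abs(p_new)) > 1e6:
            return None          # diverged: speed parameter too large
        if np.max(np.abs(p_new - p)) < tol:
            return p_new         # converged to a stable price
        p = p_new
    return p

print("v=0.03:", simulate(0.03))   # converges to the equilibrium price
print("v=0.30:", simulate(0.30))   # None: adjustment speed too large
```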

  17. [A study of a coordinate-transform iterative fitting method to extract bio-impedance model parameters].

    Science.gov (United States)

    Zhou, Liming; Yang, Yuxing; Yuan, Shiying

    2006-02-01

    A new algorithm, the coordinate-transform iterative optimizing method based on the least squares curve fitting model, is presented. This algorithm is used for extracting bio-impedance model parameters. It is superior to other methods: for example, its convergence is faster and its calculation precision is higher. The model parameters Ri, Re, Cm and alpha are extracted rapidly and accurately. With the aim of lowering power consumption, decreasing the price and improving the price-to-performance ratio, a practical bio-impedance measurement system with two CPUs has been built. Preliminary results show that the intracellular resistance Ri increases markedly with increasing workload during sitting, which reflects ischemic changes in the lower limbs.
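
    As an illustration of the parameter-extraction task, the sketch below fits a Cole-type bio-impedance model to synthetic spectra with a plain nonlinear least squares solver; it is not the coordinate-transform iterative method of the paper, and the circuit topology, parameter values and frequency range are assumptions.

```python
# Generic least-squares fit of a Cole-type bio-impedance model
#   Z(w) = Re*(Ri + Zc) / (Re + Ri + Zc),  Zc = 1/((j*w)**alpha * Cm),
# to extract Re, Ri, Cm and alpha. Plain nonlinear solver on synthetic
# data; not the coordinate-transform iteration of the paper.
import numpy as np
from scipy.optimize import least_squares

def cole_impedance(w, Re, Ri, Cm, alpha):
    Zc = 1.0 / ((1j * w) ** alpha * Cm)
    return Re * (Ri + Zc) / (Re + Ri + Zc)

def residuals(params, w, z_meas):
    z = cole_impedance(w, *params)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

w = 2 * np.pi * np.logspace(3, 6, 60)              # 1 kHz - 1 MHz
true = (500.0, 300.0, 2.5e-8, 0.85)                # Re, Ri, Cm, alpha
noise = 1 + 0.005 * np.random.default_rng(3).standard_normal(w.size)
z_meas = cole_impedance(w, *true) * noise

fit = least_squares(residuals, x0=[400.0, 200.0, 1e-8, 0.9],
                    args=(w, z_meas), x_scale="jac",
                    bounds=([1, 1, 1e-12, 0.5], [1e4, 1e4, 1e-6, 1.0]))
print(fit.x)  # close to (500, 300, 2.5e-8, 0.85)
```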

  18. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
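
    A toy sketch of the overall idea: simulated annealing searches the parameter space, and each candidate is scored by a Monte Carlo estimate of how well the stochastic model reproduces an observed property. The fixed-sample estimate below replaces the sequential hypothesis testing and statistical model checking of the paper, and the model, target and tuning constants are invented for illustration.

```python
# Parameter discovery by simulated annealing, scoring each candidate with
# a Monte Carlo estimate of a property's satisfaction probability.
# Toy model and target; illustrative only.
import math
import numpy as np

rng = np.random.default_rng(4)

def simulate_model(theta, n_runs=200):
    """Toy stochastic model: fraction of runs whose endpoint exceeds 1."""
    drift, noise = theta
    x = drift + noise * rng.standard_normal(n_runs)
    return np.mean(x > 1.0)

TARGET = 0.30   # "observed fact": property holds in 30% of experiments

def score(theta):
    return abs(simulate_model(theta) - TARGET)

def anneal(theta0, steps=400, t0=0.2):
    theta, best = np.array(theta0, float), np.array(theta0, float)
    s_theta = s_best = score(theta)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3
        cand = theta + rng.normal(0, 0.1, size=2)
        s_cand = score(cand)
        if s_cand < s_theta or rng.random() < math.exp((s_theta - s_cand) / temp):
            theta, s_theta = cand, s_cand
            if s_cand < s_best:
                best, s_best = cand.copy(), s_cand
    return best, s_best

print(anneal([0.0, 1.0]))
```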

  19. Individualization of the parameters of the three-elements Windkessel model using carotid pulse signal

    Science.gov (United States)

    Żyliński, Marek; Niewiadomski, Wiktor; Strasz, Anna; Gąsiorowska, Anna; Berka, Martin; Młyńczak, Marcel; Cybulski, Gerard

    2015-09-01

    The haemodynamics of the arterial system can be described by the three-element Windkessel model. As it is a lumped model, it does not account for pulse wave propagation phenomena: pulse wave velocity, reflection, and changes of the pulse pressure profile during propagation. The Modelflow method uses this model to calculate stroke volume and total peripheral resistance (TPR) from the pulse pressure obtained at the finger; the reliability of this method has been questioned. The model parameters are: aortic input impedance (Zo), TPR, and arterial compliance (Cw). They were obtained from studies of human aorta preparations. Individual adjustment is performed based on the subject's age and gender. As Cw is also affected by diseases, this may lead to inaccuracies. Moreover, the Modelflow method transforms the pulse pressure recording from the finger (Finapres©) into a remarkably different pulse pressure in the aorta using a predetermined transfer function, which is another source of error. In the present study, we indicate a way to include in the Windkessel model information obtained by adding carotid pulse recording to the finger pressure measurement. This information allows individualization of the values of Cw and Zo. It also seems reasonable to utilize the carotid pulse, which better reflects aortic pressure, to individualize the transfer function. Despite its simplicity, the Windkessel model describes essential phenomena in the arterial system remarkably well; therefore, it seems worthwhile to check whether individualization of its parameters would increase the reliability of results obtained with this model.
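
    For context, the sketch below integrates the three-element Windkessel model forward in time for a prescribed aortic inflow, using the standard relation C*dP/dt + P/R = (1 + Zo/R)*Q(t) + C*Zo*dQ/dt. It is only a forward simulation with illustrative parameter values, not the Modelflow stroke-volume algorithm or the individualization procedure proposed in the paper.

```python
# Forward simulation of the three-element Windkessel model
#   C*dP/dt + P/R = (1 + Zo/R)*Q(t) + C*Zo*dQ/dt
# driven by a half-sine systolic inflow. Parameter values are illustrative.
import numpy as np

Zo, R, C = 0.05, 1.0, 1.5        # mmHg*s/mL, mmHg*s/mL, mL/mmHg
T, T_sys, dt = 0.8, 0.3, 1e-4    # cardiac period, systole, time step (s)
t = np.arange(0, 10 * T, dt)

def inflow(time):
    phase = time % T
    return np.where(phase < T_sys, 400.0 * np.sin(np.pi * phase / T_sys), 0.0)

Q = inflow(t)
dQdt = np.gradient(Q, dt)

P = np.empty_like(t)
P[0] = 80.0                                   # initial aortic pressure, mmHg
for i in range(1, t.size):
    dPdt = ((1 + Zo / R) * Q[i - 1] + C * Zo * dQdt[i - 1] - P[i - 1] / R) / C
    P[i] = P[i - 1] + dt * dPdt

last = t > 9 * T                              # last beat, after transients
print("systolic %.1f mmHg, diastolic %.1f mmHg" % (P[last].max(), P[last].min()))
```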

  20. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  1. Auxiliary Parameter MCMC for Exponential Random Graph Models

    Science.gov (United States)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  2. A risk-adjusted CUSUM in continuous time based on the Cox model.

    Science.gov (United States)

    Biswas, Pinaki; Kalbfleisch, John D

    2008-07-30

    In clinical practice, it is often important to monitor the outcomes associated with participating facilities. In organ transplantation, for example, it is important to monitor and assess the outcomes of the transplants performed at the participating centers and to send a signal if a significant upward trend in the failure rates is detected. In manufacturing and process control contexts, the cumulative sum (CUSUM) technique has been used as a sequential monitoring scheme for some time. More recently, the CUSUM has also been suggested for use in medical contexts. In this article, we outline a risk-adjusted CUSUM procedure based on the Cox model for a failure time outcome. Theoretical approximations to the average run length are obtained for this new proposal and for some discrete time procedures suggested in the literature. The proposed scheme and approximations are evaluated in simulations and illustrated on transplant facility data from the Scientific Registry of Transplant Recipients.
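
    The sketch below shows one common form of a continuous-time risk-adjusted CUSUM: the log-likelihood-ratio process N(t)*log(rho) - (rho - 1)*E(t), where N(t) counts observed events and E(t) is the risk-adjusted expected cumulative hazard (for example accumulated from a previously fitted Cox model), is monitored against its running minimum and a signal is raised when the difference exceeds a threshold. The event stream, expected hazards and threshold are synthetic, and this is a generic illustration rather than the exact procedure derived in the article.

```python
# Simplified risk-adjusted CUSUM for detecting a hazard increase by a
# factor rho at a facility. Each observed event adds log(rho); the
# risk-adjusted expected cumulative hazard (e.g. from a fitted Cox model)
# is subtracted. Synthetic data and threshold; illustrative only.
import numpy as np

rho = 2.0          # hazard ratio under the alternative ("doubled risk")
threshold = 3.0    # signalling limit h, chosen to set the false-alarm rate

event_days = np.array([30, 45, 52, 60, 63, 70, 74, 80])
expected_cum_hazard = np.array([0.4, 0.6, 0.7, 0.8, 0.85, 0.95, 1.0, 1.1])

score = 0.0
low_water_mark = 0.0
for i, (day, ec) in enumerate(zip(event_days, expected_cum_hazard), start=1):
    # log-likelihood-ratio process: U(t) = N(t)*log(rho) - (rho - 1)*E(t)
    llr = i * np.log(rho) - (rho - 1.0) * ec
    low_water_mark = min(low_water_mark, llr)
    score = llr - low_water_mark
    if score >= threshold:
        print(f"signal at day {day}: CUSUM = {score:.2f}")
        break
else:
    print(f"no signal; final CUSUM = {score:.2f}")
```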

  3. The use of satellites in gravity field determination and model adjustment

    Science.gov (United States)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

    Methods to improve gravity field models of the Earth with available data from satellite observations are proposed and discussed. In principle, all types of satellite observations mentioned give information on the satellite orbit perturbations and, in turn, on the Earth's gravity field, because satellite orbits are affected most by the Earth's gravity field. Therefore, two subjects are addressed: representation forms of the gravity field of the Earth and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.

  4. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess comparative efficiency and, on that basis, to choose the optimal scheme of agricultural production structure adjustment from among multiple candidate schemes. Based on the results of the DEA model, the scale advantages of each candidate scheme were analysed, and the underlying reasons why some schemes are not DEA-efficient were examined in depth, which clarified the system and methodology for improving these candidate schemes. Finally, another method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when the adjustment of the agricultural production structure is carried out.

  5. UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA

    Energy Technology Data Exchange (ETDEWEB)

    Davis, S.C.

    2000-11-16

    The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.

  6. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

    In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...

  7. Numerical model for thermal parameters in optical materials

    Science.gov (United States)

    Sato, Yoichi; Taira, Takunori

    2016-04-01

    Thermal parameters of optical materials, such as thermal conductivity, thermal expansion, and the temperature coefficient of refractive index, play a decisive role in the thermal design of laser cavities. Therefore, their numerical values, including temperature dependence, are quite important for developing high-intensity laser oscillators, in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modeling of these two quantities is required to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity in the calculation for each optical material. In this work we report thermal conductivities of various optical materials such as Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of thermal expansion and of the temperature coefficient of refractive index for YAG, YVO, and GVO is also included in this work for convenience. We think our numerical model is quite useful not only for thermal analysis in laser cavities or optical waveguides but also for the evaluation of physical properties in various transparent materials.

  8. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Land Building Models: Uncertainty in and Sensitivity to Input Parameters, by Ty V. Wamsley (ERDC/CHL CHETN-VI-44, August 2013). PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a

  9. The oblique S parameter in higgsless electroweak models

    CERN Document Server

    Rosell, Ignasi

    2012-01-01

    We present a one-loop calculation of the oblique S parameter within Higgsless models of electroweak symmetry breaking. We have used a general effective Lagrangian with at most two derivatives, implementing the chiral symmetry breaking SU(2)_L x SU(2)_R -> SU(2)_{L+R} with Goldstones, gauge bosons and one multiplet of vector and axial-vector resonances. The estimation is based on the short-distance constraints and the dispersive approach proposed by Peskin and Takeuchi.

  10. A statistical model of proton with no parameter

    CERN Document Server

    Zhang, Y; Zhang, Yongjun; Yang, Li-Ming

    2001-01-01

    In this paper, the proton is treated as an ensemble of Fock states. Using the detailed balance principle and the equal probability principle, the unpolarized parton distributions of the proton are obtained through Monte Carlo simulation without any free parameter. A new origin of the light-flavor sea-quark asymmetry is given here, besides known models such as Pauli blocking, the meson cloud, the chiral field, the chiral soliton and instantons.

  11. Model of the Stochastic Vacuum and QCD Parameters

    CERN Document Server

    Ferreira, E; Ferreira, Erasmo; Pereira, Flávio

    1997-01-01

    Accounting for the two independent correlation functions of the QCD vacuum, we improve the simple and consistent description given by the model of the stochastic vacuum to the high-energy pp and pbar-p data, with a new determination of parameters of non-perturbative QCD. The increase of the hadronic radii with the energy accounts for the energy dependence of the observables.

  12. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least squares technique is employed to obtain a multivariable model of the autoregressive moving average type, needed for the design of a multivariable self-tuning controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. The curves showing the speed of convergence of the parameters towards their numerical values are presented.

  13. Is flow velocity a significant parameter in flood damage modelling?

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2009-10-01

    Full Text Available Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolutions were used to calculate the impact parameters at the sites in which damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.
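
    For reference, the energy head mentioned above combines water depth and flow velocity into a single impact parameter, E = d + v^2/(2g); the sketch below evaluates it and flags the 2 m critical level suggested in the abstract. The depth/velocity pairs are illustrative.

```python
# Energy head as a flood impact parameter: E = d + v**2 / (2*g).
# The 2 m criterion is the critical impact level quoted in the abstract;
# the example depth/velocity pairs are illustrative.
G = 9.81  # m/s^2

def energy_head(depth_m, velocity_ms):
    return depth_m + velocity_ms**2 / (2 * G)

for d, v in [(0.5, 1.0), (1.5, 3.0), (2.2, 0.5)]:
    e = energy_head(d, v)
    flag = "above" if e > 2.0 else "below"
    print(f"depth {d} m, velocity {v} m/s -> energy head {e:.2f} m ({flag} 2 m)")
```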

  14. A robust approach for the determination of Gurson model parameters

    Directory of Open Access Journals (Sweden)

    R. Sepe

    2016-07-01

    Full Text Available Among the most promising models introduced in recent years for obtaining a better understanding of the physical phenomena involved in the macroscopic mechanism of crack propagation, the one proposed by Gurson and Tvergaard links the propagation of a crack to the nucleation, growth and coalescence of micro-voids, thereby connecting the micromechanical characteristics of the component under examination to crack initiation and propagation up to the macroscopic scale. It must be pointed out that, even though the statistical character of some of the many physical parameters involved in the model has been put in evidence, no serious attempt has been made so far to link the corresponding statistics to experimental, macroscopic results such as crack initiation time, material toughness, and the residual strength of the cracked component (R-curve). In this work, such an analysis was carried out in a twofold way: the former part concerned the study of the influence exerted by each of the physical parameters on the material toughness, and the latter concerned the use of the Stochastic Design Improvement (SDI) technique to perform a “robust” numerical calibration of the model, evaluating the nominal values of the physical and correction parameters that fit a particular experimental result even in the presence of their “natural” variability.

  15. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
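
    For reference, the three-parameter logistic item response function that these calibration methods estimate is P(theta) = c + (1 - c)/(1 + exp(-D*a*(theta - b))); the sketch below evaluates it for a few ability values. The item parameters are made up, and the five calibration methods compared in the article are not reproduced.

```python
# Three-parameter logistic (3PL) item response function:
# a = discrimination, b = difficulty, c = pseudo-guessing.
# A scaling constant D = 1.7 is commonly included.
import numpy as np

def p_3pl(theta, a, b, c, D=1.7):
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(p_3pl(theta, a=1.2, b=0.0, c=0.2))
```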

  16. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  17. Nonlocal order parameters for the 1D Hubbard model.

    Science.gov (United States)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-07

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.

  18. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  19. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  20. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a higher computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can become trapped in local regions of attraction. The global SCE procedure is, in general, more effective...
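
    The contrast between a local gradient-based search and a global population-based search can be illustrated on a synthetic multimodal calibration surface. In the sketch below, SciPy's differential_evolution merely stands in for SCE and least_squares for a Gauss-Marquardt-Levenberg style local method; the response surface, starting point, and bounds are invented for demonstration.

      # Hypothetical sketch: local gradient-based search vs. global population search
      # on a deliberately multimodal calibration surface.
      import numpy as np
      from scipy.optimize import differential_evolution, least_squares

      def residuals(p):
          # Synthetic surface with many stationary points around the true (1.0, 2.0).
          x, y = p
          return np.array([x - 1.0 + 0.3 * np.sin(5 * x),
                           y - 2.0 + 0.3 * np.sin(5 * y)])

      local = least_squares(residuals, x0=[4.0, -3.0])              # may stall locally
      globl = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                                     bounds=[(-5, 5), (-5, 5)], seed=0)
      print("local optimum: ", local.x, "cost", 2 * local.cost)
      print("global optimum:", globl.x, "cost", globl.fun)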

  1. Finding the effective parameter perturbations in atmospheric models: the LORENZ63 model as case study

    NARCIS (Netherlands)

    Moolenaar, H.E.; Selten, F.M.

    2004-01-01

    Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate i
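
    A minimal way to probe how a parameter perturbation changes a model's "climate" is to integrate the Lorenz-63 system for a long time and compare a long-run statistic under nominal and perturbed parameter values. The sketch below assumes the standard Lorenz-63 equations and uses the mean of the z variable as an illustrative climate statistic; it does not reproduce the paper's method for ranking effective perturbations.

      # Hypothetical sketch: integrate the Lorenz-63 system and compare a long-run
      # ("climate") statistic for a nominal and a perturbed parameter value.
      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz63(t, state, sigma, r, b):
          x, y, z = state
          return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

      def climate_mean_z(r, t_end=200.0):
          sol = solve_ivp(lorenz63, (0.0, t_end), [1.0, 1.0, 1.0],
                          args=(10.0, r, 8.0 / 3.0), max_step=0.01)
          z = sol.y[2]
          return z[len(z) // 2:].mean()     # discard transient, average second half

      print("mean z at r=28:  ", climate_mean_z(28.0))
      print("mean z at r=28.5:", climate_mean_z(28.5))   # small parameter perturbation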

  2. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
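
    Informal goodness-of-fit measures are often used in a GLUE-style workflow: sample parameter sets, score each simulation with an informal efficiency measure, retain the "behavioural" sets, and weight them to summarize uncertainty. The sketch below illustrates that generic idea on a toy exponential epidemic curve; the model, acceptance threshold, and scoring choice are placeholders, not the authors' method.

      # Hypothetical GLUE-style sketch: sample parameters, score each run with an
      # informal goodness-of-fit measure, keep "behavioural" sets, and weight them.
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(30)
      observed = 100 * np.exp(-0.15 * t) + 5 * rng.normal(size=t.size)  # noisy counts

      def epidemic_model(beta):
          """Toy stand-in for a spatially explicit epidemic-model output."""
          return 100 * np.exp(-beta * t)

      samples = rng.uniform(0.05, 0.30, size=2000)
      scores = np.array([1.0 - np.sum((observed - epidemic_model(b)) ** 2)
                         / np.sum((observed - observed.mean()) ** 2) for b in samples])

      behavioural = scores > 0.8                    # informal acceptance threshold
      weights = scores[behavioural] / scores[behavioural].sum()
      post_mean = np.sum(weights * samples[behavioural])
      print("behavioural sets:", behavioural.sum(), " weighted beta estimate:", post_mean)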

  3. Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland

    Science.gov (United States)

    Steffen, R.; Lund, B.

    2015-12-01

    The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", start in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we will present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA induced earthquakes.

  4. Towards individualized dose constraints: Adjusting the QUANTEC radiation pneumonitis model for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.;

    2014-01-01

    Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method to adjust dose-response relationships for clinical risk factors was employed. Effect size estimates (odds ratios) for risk factors were drawn from a recently published meta-analysis. Baseline values for D50 and γ50 were found. The method was tested in an independent dataset (103 patients), comparing the predictive power of the dose-only QUANTEC model and the model including risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated. Results. The reference dose-response relationship...
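
    The general idea of adjusting a logistic dose-response curve for patient-specific risk factors can be sketched as a shift of the odds by an odds ratio. The values below (D50, γ50, odds ratio) are placeholders, not the published QUANTEC or meta-analysis values, and the adjustment rule is a simplified illustration rather than the method validated in the paper.

      # Hypothetical sketch: shift a logistic dose-response curve for radiation
      # pneumonitis by a patient-specific odds ratio.  D50, gamma50 and the odds
      # ratio are placeholders, not published values.
      import numpy as np

      def rp_risk(mean_lung_dose, d50, gamma50):
          """Logistic NTCP curve parametrized by D50 and normalized slope gamma50."""
          return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - mean_lung_dose / d50)))

      def adjusted_risk(mean_lung_dose, d50, gamma50, odds_ratio):
          """Apply a clinical risk factor as a multiplicative shift of the odds."""
          p = rp_risk(mean_lung_dose, d50, gamma50)
          odds = odds_ratio * p / (1.0 - p)
          return odds / (1.0 + odds)

      d50, gamma50 = 30.0, 1.0            # placeholder baseline parameters (Gy, -)
      for mld in (10.0, 20.0, 30.0):
          print(mld, rp_risk(mld, d50, gamma50),
                adjusted_risk(mld, d50, gamma50, odds_ratio=2.0))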

  5. Order-parameter model for unstable multilane traffic flow

    Science.gov (United States)

    Lubashevsky; Mahnke

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow → synchronized mode → jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the "many-body" effects in the car interaction in contrast to such variables as the mean car density and velocity being actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers and by taking into account the general properties of the driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifested itself in the above-mentioned phase transitions and gave rise to the hysteresis in both of them. Besides, the jam is characterized by the vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of the car cluster self-formation under the "free flow → synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  6. Accelerated gravitational wave parameter estimation with reduced order modeling.

    Science.gov (United States)

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  7. Global parameter estimation of the Cochlodinium polykrikoides model using bioassay data

    Institute of Scientific and Technical Information of China (English)

    CHO Hong-Yeon; PARK Kwang-Soon; KIM Sung

    2016-01-01

    Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on the aquacultures of the coastal seas of Korea and Japan. Information on their expected movement tracks and boundaries of influence is very useful and important for the effective establishment of a reduction plan. In general, the information is supported by a red-tide (a.k.a algal bloom) model. The performance of the model is highly dependent on the accuracy of parameters, which are the coefficients of functions approximating the biological growth and loss patterns of the C. polykrikoides. These parameters have been estimated using the bioassay data composed of growth-limiting factor and net growth rate value pairs. In the case of the C. polykrikoides, the parameters are different from each other in accordance with the used data because the bioassay data are sufficient compared to the other algal species. The parameters estimated by one specific dataset can be viewed as locally-optimized because they are adjusted only by that dataset. In cases where the other one data set is used, the estimation error might be considerable. In this study, the parameters are estimated by all available data sets without the use of only one specific data set and thus can be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the values of the experimental and estimated rates. Based on quantitative error analysis, the root-mean-squared errors of the global parameters show smaller values, approximately 25%–50%, than the values of the local parameters. In addition, bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using the in-situ monitoring data is highly required. As opposed to the bioassay
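
    The contrast between locally and globally optimized parameters can be illustrated by fitting a growth-rate response curve to each bioassay dataset separately and to the pooled data, minimizing the squared estimation error in each case. The temperature-response form, datasets, and noise level in the sketch below are synthetic stand-ins, not the paper's growth and loss functions.

      # Hypothetical sketch: "local" fits to single bioassay datasets vs. a "global"
      # fit pooling all datasets, minimizing the pooled squared error.
      import numpy as np
      from scipy.optimize import curve_fit

      def growth_rate(temp, mu_max, t_opt, width):
          return mu_max * np.exp(-((temp - t_opt) / width) ** 2)

      rng = np.random.default_rng(3)
      true = (0.6, 25.0, 6.0)
      datasets = []
      for _ in range(3):                               # three independent bioassays
          temp = rng.uniform(10, 32, size=15)
          rate = growth_rate(temp, *true) + 0.03 * rng.normal(size=temp.size)
          datasets.append((temp, rate))

      # Local fits: one parameter set per dataset.
      local = [curve_fit(growth_rate, t, r, p0=(0.5, 22.0, 5.0))[0] for t, r in datasets]

      # Global fit: one parameter set against the pooled data.
      all_t = np.concatenate([t for t, _ in datasets])
      all_r = np.concatenate([r for _, r in datasets])
      globl, _ = curve_fit(growth_rate, all_t, all_r, p0=(0.5, 22.0, 5.0))
      print("local fits :", np.round(local, 3))
      print("global fit :", np.round(globl, 3))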

  8. Optimal vibration control of curved beams using distributed parameter models

    Science.gov (United States)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of linear quadratic optimal controller using spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are developed firstly considering its shear deformation and rotary inertia, and then the state space model of the curved beam is established directly using the partial differential equations of motion. The functional gains for the distributed parameter model of curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in frequency domain. Finally, the suppression of the vibration at the free end of a cantilevered curved beam by point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant gain velocity feedback control law, and the performance of the presented method on avoidance of control spillover is demonstrated.

  9. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    Science.gov (United States)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  10. A covariate-adjustment regression model approach to noninferiority margin definition.

    Science.gov (United States)

    Nie, Lei; Soon, Guoxing

    2010-05-10

    To maintain the interpretability of the effect of experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect presented in the population of the historical trial. To prevent constancy assumption violation, clinical trial sponsors were recommended to make sure that the design of the active control trial is as close to the design of the historical trial as possible. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method to determine the impact of the discrepancies on the constancy assumption violation, a correct judgment seems difficult. In this paper, we present a covariate-adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population difference between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through achieving goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through achieving goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when constancy assumption is violated due to the population difference. We illustrate the covariate-adjustment approach through a case study.

  11. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL]; Post, Wilfred M [ORNL]; Mayes, Melanie [ORNL]; Frerichs, Joshua T [ORNL]; Jagadamma, Sindhu [ORNL]

    2012-01-01

    While soil enzymes have been explicitly included in the soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normal distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; average ratio of pHsen to pHopt ranged 0.3–0.4 for the five enzymes, which means that an increase or decrease of 1.1–1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1–2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
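
    Combining these ingredients, a rate expression can be written as Michaelis-Menten kinetics with an Arrhenius correction of Vmax to the 20 °C reference temperature and an exponential-quadratic pH modifier. The sketch below follows that structure; every coefficient value is a placeholder rather than a value from the compiled dataset.

      # Hypothetical sketch of an enzyme rate combining Michaelis-Menten kinetics with
      # an Arrhenius temperature correction (reference 20 degC) and an
      # exponential-quadratic pH modifier.  All coefficient values are placeholders.
      import numpy as np

      R = 8.314      # J mol-1 K-1

      def enzyme_rate(substrate, temp_c, ph,
                      vmax_ref=10.0, km=0.5, ea=45e3, ph_opt=5.5, ph_sen=2.0):
          # Arrhenius scaling of Vmax from the 20 degC reference temperature.
          t_ref, t = 293.15, temp_c + 273.15
          vmax = vmax_ref * np.exp(-ea / R * (1.0 / t - 1.0 / t_ref))
          # Exponential-quadratic pH modifier, as described in the abstract.
          f_ph = np.exp(-((ph - ph_opt) / ph_sen) ** 2)
          return vmax * substrate / (km + substrate) * f_ph

      print(enzyme_rate(substrate=2.0, temp_c=20.0, ph=5.5))   # reference conditions
      print(enzyme_rate(substrate=2.0, temp_c=30.0, ph=4.0))   # warmer, more acidic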

  12. Modeling of state parameter and hardening function for granular materials

    Institute of Scientific and Technical Information of China (English)

    彭芳乐; 李建中

    2004-01-01

    A modified plastic strain energy as hardening state parameter for dense sand was proposed, based on the results from a series of drained plane strain tests on saturated dense Japanese Toyoura sand with precise stress and strain measurements along many stress paths. In addition, a unique hardening function between the plastic strain energy and the instantaneous stress path was also presented, which was independent of stress history. The proposed state parameter and hardening function was directly verified by the simple numerical integration method. It is shown that the proposed hardening function is independent of stress history and stress path and is appropriate to be used as the hardening rule in constitutive modeling for dense sand, and it is also capable of simulating the effects on the deformation characteristics of stress history and stress path for dense sand.

  13. Parameter Estimation of the Extended Vasiček Model

    Directory of Open Access Journals (Sweden)

    Sanae RUJIVAN

    2010-01-01

    In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
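
    The extended model's density expansion is not reproduced here, but the basic idea of likelihood-based estimation can be shown on the standard Vasiček model, whose transition density is exactly Gaussian, so maximum likelihood reduces to an AR(1) regression in closed form. The sketch below works under that assumption with invented parameter values.

      # Hypothetical sketch: exact maximum likelihood for the *standard* Vasicek model
      # dr = kappa*(theta - r) dt + sigma dW, whose Gaussian transition density makes
      # the MLE equivalent to an AR(1) regression.
      import numpy as np

      rng = np.random.default_rng(4)
      kappa, theta, sigma, dt, n = 1.5, 0.05, 0.02, 1.0 / 252, 5000

      # Simulate a discretely sampled path using the exact transition.
      r = np.empty(n); r[0] = 0.03
      a = np.exp(-kappa * dt)
      s = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
      for i in range(1, n):
          r[i] = theta + (r[i - 1] - theta) * a + s * rng.normal()

      # AR(1) regression r_t = c + b*r_{t-1} + eps gives the MLEs in closed form.
      x, y = r[:-1], r[1:]
      b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
      c = y.mean() - b * x.mean()
      resid = y - (c + b * x)
      kappa_hat = -np.log(b) / dt
      theta_hat = c / (1 - b)
      sigma_hat = resid.std() * np.sqrt(2 * kappa_hat / (1 - b ** 2))
      print(kappa_hat, theta_hat, sigma_hat)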

  14. Modeling and test on height adjustment system of electrically-controlled air suspension for agricultural vehicles

    National Research Council Canada - National Science Library

    Chen Yuexia; Chen Long; Wang Ruochen; Xu Xing; Shen Yujie; Liu Yanling

    2016-01-01

    To reduce damage to pavement, vehicle components, and agricultural products during transportation, an electrically controlled air suspension height adjustment system for agricultural transport vehicles

  15. Computational fluid dynamics model of WTP clearwell: Evaluation of critical parameters influencing model performance

    Energy Technology Data Exchange (ETDEWEB)

    Ducoste, J.; Brauer, R.

    1999-07-01

    A computational fluid dynamics (CFD) model of a water treatment plant clearwell was analyzed. Model parameters were examined to determine their influence on the effluent residence time distribution (RTD) function. The study revealed that several model parameters could have a significant impact on the shape of the RTD function and consequently raise the level of uncertainty in accurate predictions of clearwell hydraulics. The study also revealed that although the modeler could select a distribution of values for some of the model parameters, most of these values can be ruled out by requiring the difference between the calculated and theoretical hydraulic retention time to be within 5% of the theoretical value.

  16. Set up of a method for the adjustment of resonance parameters on integral experiments; Mise au point d'une methode d'ajustement des parametres de resonance sur des experiences integrales

    Energy Technology Data Exchange (ETDEWEB)

    Blaise, P.

    1996-12-18

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions.The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.).

  17. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Science.gov (United States)

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced, and the poorer the parenting they perceived, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.

  18. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    Science.gov (United States)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network are acknowledged as a dynamic game process between them. Existing Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, since PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most of these models have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the reliance on absolute cost differences for route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium, the stationary path flow pattern, and the stationary link flow pattern is established, which can be used to judge whether a given network traffic flow has reached user equilibrium by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
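
    A day-to-day adjustment rule in this spirit can be sketched by letting flow leave a costlier route toward each cheaper alternative at a rate governed by the relative cost difference. The sketch below is a simplification for illustration (BPR-type route costs, an arbitrary step size and network); it is not the paper's exact rePRAP formulation or its stability conditions.

      # Hypothetical day-to-day route adjustment sketch: flow moves off a costlier
      # route toward cheaper alternatives at a rate set by relative cost differences.
      import numpy as np

      demand = 10.0
      free_flow = np.array([1.0, 1.2, 1.5])     # free-flow travel times of three routes
      capacity = np.array([4.0, 5.0, 6.0])

      def route_costs(flow):
          # BPR-type congestion function (illustrative route cost)
          return free_flow * (1.0 + 0.15 * (flow / capacity) ** 4)

      flow = np.array([demand, 0.0, 0.0])       # everyone starts on route 0
      step = 0.1                                # switching-rate constant
      for day in range(2000):
          c = route_costs(flow)
          new_flow = flow.copy()
          for i in range(len(flow)):
              cheaper = np.where(c < c[i])[0]
              if cheaper.size == 0:
                  continue
              rel_gap = (c[i] - c[cheaper]) / c[i]   # relative cost differences
              shift = step * flow[i] * rel_gap       # outflow toward each cheaper route
              total = shift.sum()
              if total > flow[i]:                    # never move more flow than exists
                  shift *= flow[i] / total
              new_flow[i] -= shift.sum()
              new_flow[cheaper] += shift
          flow = new_flow

      print("near-equilibrium flows:", np.round(flow, 3))
      print("route costs           :", np.round(route_costs(flow), 3))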

  19. Strong parameter renormalization from optimum lattice model orbitals

    Science.gov (United States)

    Brosco, Valentina; Ying, Zu-Jian; Lorenzana, José

    2017-01-01

    Which is the best single-particle basis to express a Hubbard-like lattice model? A rigorous variational answer to this question leads to equations the solution of which depends in a self-consistent manner on the lattice ground state. Contrary to naive expectations, for arbitrary small interactions, the optimized orbitals differ from the noninteracting ones, leading also to substantial changes in the model parameters as shown analytically and in an explicit numerical solution for a simple double-well one-dimensional case. At strong coupling, we obtain the direct exchange interaction with a very large renormalization with important consequences for the explanation of ferromagnetism with model Hamiltonians. Moreover, in the case of two atoms and two fermions we show that the optimization equations are closely related to reduced density-matrix functional theory, thus establishing an unsuspected correspondence between continuum and lattice approaches.

  20. Multi-parameter models of innovation diffusion on complex networks

    CERN Document Server

    McCullen, Nicholas J; Bale, Catherine S E; Foxon, Tim J; Gale, William F

    2012-01-01

    A model, applicable to a range of innovation diffusion applications with a strong peer to peer component, is developed and studied, along with methods for its investigation and analysis. A particular application is to individual households deciding whether to install an energy efficiency measure in their home. The model represents these individuals as nodes on a network, each with a variable representing their current state of adoption of the innovation. The motivation to adopt is composed of three terms, representing personal preference, an average of each individual's network neighbours' states and a system average, which is a measure of the current social trend. The adoption state of a node changes if a weighted linear combination of these factors exceeds some threshold. Numerical simulations have been carried out, computing the average uptake after a sufficient number of time-steps over many realisations at a range of model parameter values, on various network topologies, including random (Erdos-Renyi), s...
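
    The decision rule described above translates directly into a simulation: each node adopts once a weighted combination of its personal preference, its neighbours' average state, and the system-wide average exceeds a threshold. The sketch below uses an Erdos-Renyi network and invented weights, sizes, and thresholds purely for illustration.

      # Hypothetical sketch of the described threshold model on a random network.
      import numpy as np

      rng = np.random.default_rng(5)
      n, p_edge = 200, 0.03
      adj = rng.random((n, n)) < p_edge
      adj = np.triu(adj, 1)
      adj = adj | adj.T                              # symmetric Erdos-Renyi adjacency

      preference = rng.random(n)                     # personal preference per node
      threshold = 0.5
      w_pref, w_neigh, w_system = 0.5, 0.3, 0.2      # weights of the three terms

      state = (rng.random(n) < 0.10).astype(float)   # a few initial adopters
      for step in range(50):
          deg = adj.sum(axis=1)
          neigh_avg = np.where(deg > 0, adj @ state / np.maximum(deg, 1), 0.0)
          motivation = (w_pref * preference + w_neigh * neigh_avg
                        + w_system * state.mean())
          # Adoption is absorbing: once a node adopts it stays adopted.
          state = np.maximum(state, (motivation > threshold).astype(float))
      print("final uptake fraction:", state.mean())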

  1. Reconstructing parameters of spreading models from partial observations

    CERN Document Server

    Lokhov, Andrey Y

    2016-01-01

    Spreading processes are often modelled as a stochastic dynamics occurring on top of a given network with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.

  2. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  3. Connecting Global to Local Parameters in Barred Galaxy Models

    Indian Academy of Sciences (India)

    N. D. Caranicolas

    2002-09-01

    We present connections between global and local parameters in a realistic dynamical model, describing motion in a barred galaxy. Expanding the global model in the vicinity of a stable Lagrange point, we find the potential of a two-dimensional perturbed harmonic oscillator, which describes local motion near the centre of the global model. The frequencies of oscillations and the coefficients of the perturbing terms are not arbitrary but are connected to the mass, the angular rotation velocity, the scale length and the strength of the galactic bar. The local energy is also connected to the global energy. A comparison of the properties of orbits in the global and local potential is also made.

  4. Dynamic Stall Prediction of a Pitching Airfoil using an Adjusted Two-Equation URANS Turbulence Model

    Directory of Open Access Journals (Sweden)

    Galih Bangga

    2017-01-01

    The analysis of dynamic stall is becoming increasingly important due to its impact on many streamlined structures such as helicopter and wind turbine rotor blades. The present paper provides Computational Fluid Dynamics (CFD) predictions of a pitching NACA 0012 airfoil at a reduced frequency of 0.1 and at a small Reynolds number of 1.35e5. The simulations were carried out by adjusting the k − ε URANS turbulence model in order to damp the turbulence production in the near-wall region. The damping factor was introduced as a function of wall distance in the buffer zone region. Parametric studies on the variables involved were conducted and the effect on the prediction capability was shown. The results were compared with available experimental data and CFD simulations using some selected two-equation turbulence models. An improvement of the lift coefficient prediction was shown, even though the results still only roughly mimic the experimental data. The flow development under the dynamic stall onset was investigated with regard to the effect of the leading and trailing edge vortices. Furthermore, the characteristics of the flow at several chord lengths downstream of the airfoil were evaluated.

  5. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.

  6. Parameter sensitivity in satellite-gravity-constrained geothermal modelling

    Science.gov (United States)

    Pastorutti, Alberto; Braitenberg, Carla

    2017-04-01

    The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground-water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of them have the potential to distort the gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect with respect to other parameter uncertainties on the modelled temperature depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that the global gravity is useful for geothermal studies.

  7. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity, in order to gain robust real-time control on the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and integrate it, along with models of energy sources and electrical loads, in a complete framework which represents a real time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
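
    A hybrid identification of this kind can be sketched as a coarse stochastic global search followed by a deterministic local refinement. The example below fits a first-order RC equivalent-circuit battery model to a synthetic constant-current discharge record; the model structure, bounds, and optimizers (SciPy's differential_evolution followed by Nelder-Mead) are illustrative stand-ins, not the paper's specific algorithms.

      # Hypothetical sketch of hybrid (stochastic + deterministic) identification of a
      # first-order RC equivalent-circuit battery model from a synthetic voltage record.
      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      t = np.linspace(0, 600, 200)               # seconds
      i_load = 2.0                               # constant discharge current (A)

      def terminal_voltage(params, t):
          ocv, r0, r1, c1 = params
          return ocv - i_load * r0 - i_load * r1 * (1 - np.exp(-t / (r1 * c1)))

      true = (3.7, 0.05, 0.03, 1500.0)
      rng = np.random.default_rng(6)
      v_meas = terminal_voltage(true, t) + 0.002 * rng.normal(size=t.size)

      cost = lambda p: np.mean((v_meas - terminal_voltage(p, t)) ** 2)
      bounds = [(3.0, 4.2), (0.001, 0.2), (0.001, 0.2), (100.0, 5000.0)]

      coarse = differential_evolution(cost, bounds, maxiter=50, seed=0, tol=1e-8)
      refined = minimize(cost, coarse.x, method="Nelder-Mead")   # local refinement
      print("global stage:", np.round(coarse.x, 4))
      print("refined     :", np.round(refined.x, 4))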

  8. Multiobjective Automatic Parameter Calibration of a Hydrological Model

    Directory of Open Access Journals (Sweden)

    Donghwi Jung

    2017-03-01

    This study proposes variable balancing approaches for the exploration (diversification) and exploitation (intensification) of the non-dominated sorting genetic algorithm-II (NSGA-II) with simulated binary crossover (SBX) and polynomial mutation (PM) in the multiobjective automatic parameter calibration of a lumped hydrological model, the HYMOD model. Two objectives, minimizing the percent bias and minimizing three peak flow differences, are considered in the calibration of the six parameters of the model. The proposed balancing approaches, which migrate the focus between exploration and exploitation over generations by varying the crossover and mutation distribution indices of SBX and PM, respectively, are compared with traditional static balancing approaches (the two indices' values are fixed during optimization) in a benchmark hydrological calibration problem for the Leaf River (1950 km2) near Collins, Mississippi. Three performance metrics (solution quality, spacing, and convergence) are used to quantify and compare the quality of the Pareto solutions obtained by the two different balancing approaches. The variable balancing approaches that migrate the focus of exploration and exploitation differently for SBX and PM outperformed the other methods.

  9. Constraints on the parameters of the Left Right Mirror Model

    CERN Document Server

    Cerón, V E; Díaz-Cruz, J L; Maya, M; Ceron, Victoria E.; Cotti, Umberto; Maya, Mario

    1998-01-01

    We study some phenomenological constraints on the parameters of a left right model with mirror fermions (LRMM) that solves the strong CP problem. In particular, we evaluate the contribution of mirror neutrinos to the invisible Z decay width (\\Gamma_Z^{inv}), and we find that the present experimental value on \\Gamma_Z^{inv}, can be used to place an upper bound on the Z-Z' mixing angle that is consistent with limits obtained previously from other low-energy observables. In this model the charged fermions that correspond to the standard model (SM) mix with its mirror counterparts. This mixing, simultaneously with the Z-Z' one, leads to modifications of the \\Gamma(Z --> f \\bar{f}) decay width. By comparing with LEP data, we obtain bounds on the standard-mirror lepton mixing angles. We also find that the bottom quark mixing parameters can be chosen to fit the experimental values of R_b, and the resulting values for the Z-Z' mixing angle do not agree with previous bounds. However, this disagreement disappears if on...

  10. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)]; Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)]

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.

  11. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  12. ADJUSTMENT FACTORS AND ADJUSTMENT STRUCTURE

    Institute of Scientific and Technical Information of China (English)

    Tao Benzao

    2003-01-01

    In this paper, the adjustment factors J and R put forward by Professor Zhou Jiangwen are introduced, and the nature of these adjustment factors and their role in evaluating adjustment structure are discussed and proved.

  13. Parameter optimization in differential geometry based solvation models.

    Science.gov (United States)

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.

  14. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...

  15. Allowed Parameter Regions for a Tree-Level Inflation Model

    Institute of Scientific and Technical Information of China (English)

    MENG Xin-He

    2001-01-01

    The early universe inflation is well known as a promising theory to explain the origin of the large-scale structure of the universe and to solve pressing problems of the early universe. For a reasonable inflation model, the potential during inflation must be very flat, at least in the direction of the inflaton. To construct the inflaton potential, all the known related astrophysical observations should be included. For a general tree-level hybrid inflation potential, which has not been discussed fully so far, we show how the parameters can be constrained via the observed astrophysical data, obtained to the expected accuracy, and kept consistent with cosmological requirements.

  16. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    T. Raita

    2010-09-01

    It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through

  17. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    NARCIS (Netherlands)

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over 4

  19. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance and absorption of suspended matter without pigments and yellow substance in detritus using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data has been applied to sea waters of different types in the open ocean (case 1). Using the effective numerical single parameter classification with the water type optical index m as a parameter over the whole range of the open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  20. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    of particular importance to practitioners: yield convexity adjustments, forward versus futures convexity adjustments, timing and quanto convexity adjustments. We claim that the appropriate way to look into any of these adjustments is as a side effect of a measure change, as proposed by Pelsser (2003...