WorldWideScience

Sample records for model parameter adjustments

  1. An approach to adjustment of relativistic mean field model parameters

    Directory of Open Access Journals (Sweden)

    Bayram Tuncay

    2017-01-01

    Full Text Available The Relativistic Mean Field (RMF) model, with a small number of adjusted parameters, is a powerful tool for correct predictions of various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters using experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed to improve the RMF model parameters. In particular, the ANN method's ability to capture the relations between the RMF model parameters and their predictions for the binding energies (BEs) of 58Ni and 208Pb has been found to be in agreement with literature values.

  2. A metallic solution model with adjustable parameter for describing ternary thermodynamic properties from its binary constituents

    International Nuclear Information System (INIS)

    Fang Zheng; Qiu Guanzhou

    2007-01-01

    A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy of a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the k value is determined based on the assumption that the ternary interaction generally strengthens the mixing effects for metallic solutions with weak interaction, making the Gibbs free energy of mixing of the ternary system more negative than before the interaction is considered. This point is not considered in the currently reported models, which differ only in the geometrical definition of the molar values of the components; such definitions do not involve thermodynamic principles but are completely empirical. The current model describes experimental results very well and, by adjusting the k value, also agrees with results from models widely used in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The results of the calculations, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature.

  3. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged

  4. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Directory of Open Access Journals (Sweden)

    Belehaki Anna

    2012-12-01

    Full Text Available Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated by GNSS transmitting RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height, despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  5. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected for 20 working days in June 2014. Currently, the patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacist is referred to as the server. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, at 16.98 and 16.73 minutes. Three scenarios (M/M/3, M/M/4 and M/M/5) are simulated, and the waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably. Average patient waiting time is reduced by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
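
    A minimal analytical sketch of the M/M/c waiting-time calculation that underlies the scenarios above (Erlang C formula). The arrival and service rates below are hypothetical placeholders, not the rates measured at the pharmacy unit in the study.

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving customer must wait (Erlang C formula)."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("unstable system: utilisation >= 1")
    summation = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1 - rho))
    return tail / (summation + tail)

def mean_wait(c, lam, mu):
    """Expected time in queue Wq for an M/M/c system."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical rates: 0.5 arrivals/min, 0.3 prescriptions/min per pharmacist.
for servers in (2, 3, 4, 5):
    print(servers, "servers ->", round(mean_wait(servers, 0.5, 0.3), 2), "min")
```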

  6. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method

    Science.gov (United States)

    Somovilla Gómez, Fátima

    2017-01-01

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and multi-response surface (MRS) methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffnesses and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffnesses and bulges that were obtained from the standard test FE simulations. The first adjustment criterion considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second criterion considered stiffness to be most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3–L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the

  7. Learning-parameter adjustment in neural networks

    Science.gov (United States)

    Heskes, Tom M.; Kappen, Bert

    1992-06-01

    We present a learning-parameter adjustment algorithm, valid for a large class of learning rules in neural-network literature. The algorithm follows directly from a consideration of the statistics of the weights in the network. The characteristic behavior of the algorithm is calculated, both in a fixed and a changing environment. A simple example, Widrow-Hoff learning for statistical classification, serves as an illustration.
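
    A short sketch of the closing example: Widrow-Hoff (LMS) learning on a synthetic two-class problem, with a simple decaying learning-parameter schedule standing in for the statistics-based adjustment rule of the paper (the actual Heskes-Kappen algorithm is not reproduced here; the decay heuristic and the data are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data for a linear classifier (illustrative only).
X = rng.normal(size=(500, 2)) + 1.0
X[250:] -= 2.0
y = np.r_[np.ones(250), -np.ones(250)]

w, b = np.zeros(2), 0.0
eta = 0.1                              # learning parameter

for epoch in range(20):
    sq_errors = []
    for xi, ti in zip(X, y):
        err = ti - (w @ xi + b)        # Widrow-Hoff (LMS) error
        w += eta * err * xi
        b += eta * err
        sq_errors.append(err ** 2)
    # Illustrative adjustment of the learning parameter (not the paper's rule):
    # shrink eta over time as learning settles.
    eta *= 0.9
    print(epoch, round(float(np.mean(sq_errors)), 4), round(eta, 4))
```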

  8. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Science.gov (United States)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.

  9. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

    As a result we classify convexity adjustments into forward adjustments and swaps adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...... formulas. Concretely, for LIBOR in arrears (LIA) contracts, we derive the system of Riccati ODEs one needs to compute to obtain the exact adjustment. Based upon the ideas of Schrager and Pelsser (2006) we are also able to derive general swap adjustments useful, in particular, when dealing with constant...

  10. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Hiddo Velsink

    2015-01-01

    Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices

  11. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Velsink, H.

    2015-01-01

    This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation

  12. Nuclear data adjustment methodology utilizing resonance parameter sensitivities and uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Broadhead, B.L.

    1984-01-01

    This work presents the development and demonstration of a Nuclear Data Adjustment Method that allows inclusion of both energy and spatial self-shielding into the adjustment procedure. The resulting adjustments are for the basic parameters (i.e., resonance parameters) in the resonance regions and for the group cross sections elsewhere. The majority of this development effort concerns the production of resonance parameter sensitivity information which allows the linkage between the responses of interest and the basic parameters. The resonance parameter sensitivity methodology developed herein usually provides accurate results when compared to direct recalculations using existing and well-known cross section processing codes. However, it has been shown in several cases that self-shielded cross sections can be very non-linear functions of the basic parameters. For this reason caution must be used in any study which assumes that a linear relationship exists between a given self-shielded group cross section and its corresponding basic data parameters.

  13. Adjustment or updating of models

    Indian Academy of Sciences (India)

    Department of Mechanical Engineering, Imperial College of Science, .... It is first necessary to decide upon the level of accuracy, or correctness which is sought from the adjustment of the initial model, and this will be heavily influenced by the eventual application of the ..... reviewing the degree of success attained.

  14. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  15. Adjustment or updating of models

    Indian Academy of Sciences (India)

    While the model is defined in terms of these spatial parameters, .... (mode shapes defined at the n DOFs of a typical modal test in place of the complete N DOFs .... In these expressions, N is the number of degrees of freedom in the model, while N1 and N2 are the numbers of mass and stiffness elements to be corrected ...

  16. Linking Item Response Model Parameters.

    Science.gov (United States)

    van der Linden, Wim J; Barrett, Michelle D

    2016-09-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of test equating scores on different test forms. This paper argues, however, that the use of item response models does not require any test score equating. Instead, it involves the necessity of parameter linking due to a fundamental problem inherent in the formal nature of these models: their general lack of identifiability. More specifically, item response model parameters need to be linked to adjust for the different effects of the identifiability restrictions used in separate item calibrations. Our main theorems characterize the formal nature of these linking functions for monotone, continuous response models, derive their specific shapes for different parameterizations of the 3PL model, and show how to identify them from the parameter values of the common items or persons in different linking designs.
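
    As a concrete illustration of what a linking function does in practice (not of the paper's theorems), the sketch below applies the common mean/sigma method to place 3PL item parameters from one calibration onto another calibration's scale; the difficulty values of the common items are hypothetical.

```python
import numpy as np

def mean_sigma_linking(b_source, b_target):
    """Mean/sigma linking constants A, B so that theta_target = A*theta_source + B."""
    b_source, b_target = np.asarray(b_source), np.asarray(b_target)
    A = b_target.std(ddof=1) / b_source.std(ddof=1)
    B = b_target.mean() - A * b_source.mean()
    return A, B

def transform_3pl(a, b, c, A, B):
    """Re-express 3PL item parameters (a, b, c) on the target scale."""
    return a / A, A * b + B, c

# Hypothetical difficulty estimates of five common items in two calibrations.
b_old = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_new = [-1.0, -0.1, 0.4, 1.2, 2.0]
A, B = mean_sigma_linking(b_old, b_new)
print("A =", round(A, 3), "B =", round(B, 3))
print(transform_3pl(np.array([1.1]), np.array([0.8]), np.array([0.2]), A, B))
```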

  17. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
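
    A stripped-down sketch of the reweighting idea for t-distributed errors: observations with large standardized residuals are down-weighted by (nu + 1)/(nu + r^2/sigma^2). The autoregressive error model and the estimation of the degree of freedom nu are omitted, and the Fourier-type data are synthetic, so this illustrates only the reweighting step, not the authors' full ECME algorithm.

```python
import numpy as np

def irls_t(X, y, nu=4.0, n_iter=100, tol=1e-8):
    """IRLS for a linear model with independent, scaled t-distributed errors."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r**2 / sigma2)        # robust t-based weights
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma2 = np.sum(w * (y - X @ beta_new)**2) / n
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, sigma2
        beta = beta_new
    return beta, sigma2

# Synthetic Fourier-type regression with a few gross outliers.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
y = X @ np.array([0.5, 2.0, -1.0]) + 0.05 * rng.standard_normal(200)
y[::40] += 1.0                                        # outliers
print(irls_t(X, y))
```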

  18. Adjusting parameters of aortic valve stenosis severity by body size

    DEFF Research Database (Denmark)

    Minners, Jan; Gohlke-Baerwolf, Christa; Kaufmann, Beat A

    2014-01-01

    BACKGROUND: Adjustment of cardiac dimensions by measures of body size appears intuitively convincing and in patients with aortic stenosis, aortic valve area (AVA) is commonly adjusted by body surface area (BSA). However, there is little evidence to support such an approach. OBJECTIVE: To identify...... the adequate measure of body size for the adjustment of aortic stenosis severity. METHODS: Parameters of aortic stenosis severity (jet velocity, mean pressure gradient (MPG) and AVA) and measures of body size (height, weight, BSA and body mass index (BMI)) were analysed in 2843 consecutive patients with aortic...... stenosis (jet velocity ≥2.5 m/s) and related to outcomes in a second cohort of 1525 patients from the Simvastatin/Ezetimibe in Aortic Stenosis (SEAS) study. RESULTS: Whereas jet velocity and MPG were independent of body size, AVA was significantly correlated with height, weight, BSA and BMI (Pearson...

  19. Adjustment of the dynamic weight distribution as a sensitive parameter for diagnosis of postural alteration in a rodent model of vestibular deficit.

    Directory of Open Access Journals (Sweden)

    Brahim Tighilet

    Full Text Available Vestibular disorders, by inducing significant posturo-locomotor and cognitive disorders, can severely impair the most basic tasks of everyday life. Their precise diagnosis is essential to implement appropriate therapeutic countermeasures. Monitoring their evolution is also very important to validate or, on the contrary, to adapt the undertaken therapeutic actions. To date, the diagnostic methods for posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify, in real time and in an automated way, the weight bearing of the animal on the ground. We show here that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo-lesioned rodents. Characteristics of the alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Associated with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to the dizzy or unstable patient could improve the detection of vestibular deficits and allow better prediction of their impact on posture and gait. Thus it could also allow a better follow-up of the therapeutic approaches for rehabilitating gait and balance.

  20. Response model parameter linking

    NARCIS (Netherlands)

    Barrett, M.L.D.

    2015-01-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require

  1. Premium adjustment: actuarial analysis on epidemiological models ...

    African Journals Online (AJOL)

    Premium adjustment: actuarial analysis on epidemiological models. DEA Omorogbe, SO Edobor. Abstract. In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes ...

  2. 40 CFR 91.112 - Requirement of certification-adjustable parameters.

    Science.gov (United States)

    2010-07-01

    ... adjustment in the physically available range. (b) An operating parameter is not considered adjustable if it... adjustable range during certification, production line testing, selective enforcement auditing or any in-use...

  3. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    Full Text Available The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters present a dynamic behavior for fast-front surges; namely, their residual voltage depends on the peak value as well as the duration of the injected impulse current, and they should therefore not be represented by non-linear elements alone. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage as well as the dissipated energy for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models; it works by minimizing a defined objective function that compares the simulation outcomes with the manufacturer's data and with the results obtained from previous methodologies.
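
    A generic harmony search sketch of the kind described above, minimizing a toy objective in place of the residual-voltage/dissipated-energy mismatch function; the memory size, HMCR, PAR and bandwidth values are illustrative defaults, not those used in the paper.

```python
import numpy as np

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_iter=2000, seed=0):
    """Minimal harmony search for continuous parameter fitting."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    memory = lo + (hi - lo) * rng.random((hms, dim))       # harmony memory
    scores = np.array([objective(h) for h in memory])
    for _ in range(n_iter):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                        # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                     # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                          # random selection
                new[j] = lo[j] + (hi[j] - lo[j]) * rng.random()
        new = np.clip(new, lo, hi)
        score = objective(new)
        worst = int(np.argmax(scores))
        if score < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = int(np.argmin(scores))
    return memory[best], scores[best]

# Toy objective standing in for the arrester-model mismatch to be minimized.
target = np.array([1.0, 0.2, 3.0])
print(harmony_search(lambda p: float(np.sum((p - target)**2)), [(0, 5)] * 3))
```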

  4. Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment

    Science.gov (United States)

    Zheng, Shunyi; Wang, Zheng; Huang, Rongyong

    2015-04-01

    A zoom lens is more flexible for photogrammetric measurements under diverse environments than a fixed lens. However, challenges in calibration of zoom-lens cameras preclude the wide use of zoom lenses in the field of close-range photogrammetry. Thus, a novel zoom lens calibration method is proposed in this study. In this method, instead of conducting modeling after monofocal calibrations, we summarize the empirical zoom/focus models of intrinsic parameters first and then incorporate these parameters into traditional collinearity equations to construct the fundamental mathematical model, i.e., collinearity equations with zoom- and focus-related intrinsic parameters. Similar to monofocal calibration, images taken at several combinations of zoom and focus settings are processed in a single self-calibration bundle adjustment. In the self-calibration bundle adjustment, three types of unknowns, namely, exterior orientation parameters, unknown space point coordinates, and model coefficients of the intrinsic parameters, are solved simultaneously. Experiments on three different digital cameras with zoom lenses support the feasibility of the proposed method, and their relative accuracies range from 1:4000 to 1:15,100. Furthermore, the nominal focal length written in the exchangeable image file header is found to lack reliability in experiments. Thereafter, the joint influence of zoom lens instability and zoom recording errors is further analyzed quantitatively. The analysis result is consistent with the experimental result and explains the reason why zoom lens calibration can never have the same accuracy as monofocal self-calibration.

  5. Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment

    Science.gov (United States)

    Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen

    2018-01-01

    A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and the Viking mission. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14° ± 0.03° together with a control point network of 685 points.

  6. On different types of adjustment usable to calculate the parameters of the stream power law

    Science.gov (United States)

    Demoulin, Alain; Beckers, Arnaud; Bovy, Benoît

    2012-02-01

    Model parameterization through adjustment to field data is a crucial step in the modeling and the understanding of the drainage network response to tectonic or climatic perturbations. Using as a test case a data set of 18 knickpoints that materialize the migration of a 0.7-Ma-old erosion wave in the Ourthe catchment of northern Ardennes (western Europe), we explore the impact of various data fitting on the calibration of the stream power model of river incision, from which a simple knickpoint celerity equation is derived. Our results show that statistical least squares adjustments (or misfit functions) based either on the stream-wise distances between observed and modeled knickpoint positions at time t or on differences between observed and modeled time at the actual knickpoint locations yield significantly different values for the m and K parameters of the model. As there is no physical reason to prefer one of these approaches, an intermediate least-rectangles adjustment might at first glance appear as the best compromise. However, the statistics of the analysis of 200 sets of synthetic knickpoints generated in the Ourthe catchment indicate that the time-based adjustment is the most capable of getting close to the true parameter values. Moreover, this fitting method leads in all cases to an m value lower than that obtained from the classical distance adjustment (for example, 0.75 against 0.86 for the real case of the Ourthe catchment), corresponding to an increase in the non-linear character of the dependence of knickpoint celerity on discharge.

  7. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
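
    The idea of estimating a model's parameters and its intrinsic error together by maximum likelihood can be sketched for a toy one-parameter model with normally distributed model error; the functional form and the simulated data are invented for illustration and do not reproduce the paper's treatment of experimental uncertainties.

```python
import numpy as np
from scipy.optimize import minimize

def model(x, theta):
    return theta * np.sqrt(x)          # toy "theoretical model"

rng = np.random.default_rng(6)
x = np.linspace(1, 10, 40)
y = 2.0 * np.sqrt(x) + rng.normal(0, 0.3, x.size)   # simulated "experimental" data

def neg_log_likelihood(p):
    theta, log_sigma = p
    sigma = np.exp(log_sigma)          # model error, kept positive via log-parameterisation
    r = y - model(x, theta)
    return 0.5 * np.sum(r**2 / sigma**2) + x.size * log_sigma

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0])
print("theta =", round(fit.x[0], 3), "model error =", round(float(np.exp(fit.x[1])), 3))
```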

  8. Adjustment of Sensor Locations During Thermal Property Parameter Estimation

    Science.gov (United States)

    Milos, Frank S.; Marschall, Jochen; Rasky, Daniel J. (Technical Monitor)

    1996-01-01

    The temperature-dependent thermal properties of a material may be evaluated from transient temperature histories using nonlinear parameter estimation techniques. The usual approach is to minimize the sum of the squared errors between measured and calculated temperatures at specific locations in the body. Temperature measurements are usually made with thermocouples, and it is customary to take thermocouple locations as known and fixed during parameter estimation computations. In fact, thermocouple locations are never known exactly. Location errors on the order of the thermocouple wire diameter are intrinsic to most common instrumentation procedures (e.g., inserting a thermocouple into a drilled hole), and additional errors can be expected for delicate materials, difficult installations, large thermocouple beads, etc. Thermocouple location errors are especially significant when estimating thermal properties of low-diffusivity materials, which can sustain large temperature gradients during testing. In the present work, a parameter estimation formulation is presented which allows for the direct inclusion of thermocouple positions into the primary parameter estimation procedure. It is straightforward to set bounds on thermocouple locations which exclude non-physical locations and are consistent with installation tolerances. Furthermore, bounds may be tightened to an extent consistent with any independent verification of thermocouple location, such as x-raying, and so the procedure is entirely consonant with experimental information. A mathematical outline of the procedure is given and its implementation is illustrated through numerical examples characteristic of light-weight, high-temperature ceramic insulation during transient heating. The efficacy and the errors associated with the procedure are discussed.
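
    A minimal sketch of including the sensor position among the estimated parameters, here for a semi-infinite solid subjected to a step change in surface temperature and instrumented with a single thermocouple: thermal diffusivity and thermocouple depth are estimated jointly by bounded nonlinear least squares, with the depth bounds playing the role of the installation tolerance. The physical model, tolerance values and data are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def temperature(t, alpha, depth, T_surf=1000.0, T_init=20.0):
    """Semi-infinite solid, step change in surface temperature (illustrative physics)."""
    return T_surf + (T_init - T_surf) * erf(depth / (2.0 * np.sqrt(alpha * t)))

# Synthetic "measurements": true diffusivity 1e-6 m^2/s, true depth 5.2 mm,
# while the nominal (drilled-hole) depth is 5.0 mm.
rng = np.random.default_rng(2)
t_obs = np.linspace(5, 300, 60)
T_obs = temperature(t_obs, 1e-6, 5.2e-3) + rng.normal(0, 2.0, t_obs.size)

def residuals(p):
    alpha, depth = p
    return temperature(t_obs, alpha, depth) - T_obs

# Estimate diffusivity AND sensor depth; depth bounds mimic the installation tolerance.
fit = least_squares(residuals, x0=[5e-7, 5.0e-3],
                    bounds=([1e-8, 4.5e-3], [1e-5, 5.5e-3]))
print(fit.x)   # recovered [alpha, depth]
```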

  9. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  10. Player Modeling for Intelligent Difficulty Adjustment

    Science.gov (United States)

    Missura, Olana; Gärtner, Thomas

    In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen and increase over time strongly depends on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead to players quitting the game as they get bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically, governed by the player’s performance. Modern video games utilise a game-testing process to investigate, among other factors, the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.

  11. The pre-processing technique and parameter adjustment influences ...

    African Journals Online (AJOL)

    based BPSO structure for time-varying water temperature modelling. N Hambali, M.N. Taib, I.M. Yassin, M.H.F. Rahiman. Abstract. No Abstract. Keywords: identification; NARX; particle swarm optimization; distillation column; temperature.

  12. Assimilation of satellite cloud data into the DAO finite volume Data Assimilation System using a parameter adjustment method

    Science.gov (United States)

    Norris, P.; da Silva, A.

    2003-04-01

    Cloud fraction and optical depth data from the International Satellite Cloud Climatology Project (ISCCP) and cloud liquid water path retrievals from the Special Sensor Microwave/Imager (SSM/I) instrument are assimilated into the NASA Data Assimilation Office finite volume Data Assimilation System (fvDAS) using a parameter adjustment method. The rationale behind this adjustment method is that there are several empirical parameters in the model cloud/radiation parameterizations (e.g., ``critical relative humidity'') that are not in fact universal but have remaining spatial and temporal dependencies. These parameters can be slowly adjusted in space and time to improve the model's representation of cloud properties. In this study, the Slingo-type relative-humidity-based diagnostic cloud fraction of the Community Climate Model (CCM3) is generalized to a two-parameter S-shaped dependence of cloud fraction on relative humidity. These two parameters are adjusted in both low and mid-high cloud bands using observed cloud fractions in these bands derived from ISCCP DX data. This procedure greatly improves the representation of cloud fraction in the model as compared with ISCCP data. It also significantly improves mid-latitude longwave cloud radiative forcing, as independently validated against Clouds and the Earth's Radiant Energy System (CERES) data, and mid-latitude column-averaged liquid water path (LWP) over ocean, as validated against TRMM Microwave Imager (TMI) data. The cloud fraction assimilation, by itself, degrades the shortwave cloud radiative forcing at the top-of-atmosphere, but this is recovered (as validated against CERES) by assimilating SSM/I LWP and ISCCP optical depth data via adjustment of the CCM3 diagnostic cloud liquid water parameterization and the dependence of the CCM3 column optical depth on layer cloud fractions. The net effect of this promising cloud data assimilation method is to improve forecast skill for cloud cover and optical depth. Other
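
    A sketch of what a two-parameter S-shaped cloud-fraction dependence on relative humidity could look like; a logistic form is assumed here for illustration, since the abstract does not give the exact function, and the closing comment indicates how the two parameters would be nudged during assimilation.

```python
import numpy as np

def cloud_fraction(rh, rh_crit, steepness):
    """Illustrative two-parameter S-shaped dependence of cloud fraction on RH.

    Fraction is ~0 well below rh_crit and ~1 well above it; 'steepness'
    controls the width of the transition.
    """
    return 1.0 / (1.0 + np.exp(-steepness * (rh - rh_crit)))

rh = np.linspace(0.5, 1.0, 6)
print(cloud_fraction(rh, rh_crit=0.85, steepness=30.0))
# Assimilation idea: slowly adjust (rh_crit, steepness) per region and cloud band
# so that the diagnosed cloud fraction tracks the ISCCP observations.
```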

  13. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters so as to reach the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet the needs of applications.

  14. The Use of Response Surface Methodology to Optimize Parameter Adjustments in CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Shao-Hsien Chen

    2014-01-01

    Full Text Available This paper mainly covers research intended to improve the circular accuracy of CNC machine tools and the adjustment and analysis of the main controller parameters applied to improve accuracy. In this study, controller analysis software was used to detect the adjustment status of the servo parameters of the feed axis. According to the FANUC parameter manual, the parameter address, frequency, response measurements, and the one-fourth corner acceleration and deceleration measurements of the machine tools were adjusted. Experimental design (DOE) was adopted in this study for taking circular measurements and engaging in the planning and selection of important parameter data. The Minitab R15 software was adopted for the experimental data analysis, while the semi-normal probability plot, Pareto chart, and analysis of variance (ANOVA) were adopted to determine the impacts of the significant parameter factors and the interactions among them. Additionally, based on the response surface map and contour plot, the optimal values were obtained. In addition, comparison and verification were conducted through the Taguchi method and regression analysis to improve machining accuracy and efficiency. The unadjusted error was 7.8 μm; through the regression analysis method, the error was 5.8 μm, and through the Taguchi analysis method, the error was 6.4 μm.

  15. Effect of Flux Adjustments on Temperature Variability in Climate Models

    International Nuclear Information System (INIS)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-01-01

    It has been suggested that ''flux adjustments'' in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability

  16. Examining the Correlation between Objective Injury Parameters, Personality Traits and Adjustment Measures among Burn Victims

    Directory of Open Access Journals (Sweden)

    Josef Mordechai Haik

    2015-03-01

    Full Text Available Background: Burn victims experience immense physical and mental hardship during their process of rehabilitation and regaining functionality. We examined different objective burn-related factors as well as psychological ones, in the form of personality traits, that may affect the rehabilitation process and its outcome. Objective: To assess the influence and correlation of specific personality traits and objective injury-related parameters on the adjustment of burn victims post-injury. Methods: 62 male patients admitted to our burn unit due to burn injuries were compared with 36 healthy male individuals by use of questionnaires to assess each group's psychological adjustment parameters. Multivariate and hierarchical regression analysis was conducted to identify differences between the groups. Results: A significant negative correlation was found between the objective burn injury severity (e.g., TBSA and burn depth) and the adjustment of burn victims (p<0.05, p<0.001, Table 3). Moreover, patients more severely injured tend to be more neurotic (p<0.001), and less extroverted and agreeable (p<0.01, Table 4). Conclusions: Extroverted burn victims tend to adjust better to their post-injury life, while neurotic patients tend to have difficulties adjusting. This finding may suggest new tools for early identification of maladjustment-prone patients and therefore provide them with better psychological support in a more dedicated manner.

  17. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    Full Text Available The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie’s method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
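
    For context, a small sketch of the quantity being inferred: the sample Sharpe ratio and its classical large-sample standard error under i.i.d. normal returns, the normality-based benchmark that the adjusted empirical likelihood method is designed to avoid. The monthly return series is simulated.

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    ex = np.asarray(returns) - rf
    return ex.mean() / ex.std(ddof=1)

def sharpe_se_normal(sr, n):
    """Approximate standard error of the Sharpe ratio for i.i.d. normal returns."""
    return np.sqrt((1.0 + 0.5 * sr**2) / n)

rng = np.random.default_rng(3)
r = rng.normal(0.01, 0.05, 120)        # hypothetical monthly returns
sr = sharpe_ratio(r)
print(round(sr, 3), "+/-", round(sharpe_se_normal(sr, r.size), 3))
```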

  18. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  19. Parameters-adjustable front-end controller in digital nuclear measurement system

    International Nuclear Information System (INIS)

    Hao Dejian; Zhang Ruanyu; Yan Yangyang; Wang Peng; Tang Changjian

    2013-01-01

    Background: One digitizer is used to implement a digital nuclear measurement for the acquisition of nuclear information. Purpose: A principle and method for a parameter-adjustable front-end controller is presented, with the aim of reducing quantization errors while obtaining the maximum ENOB (effective number of bits) of the ADC (analog-to-digital converter) during waveform digitizing, as well as reducing count losses. Methods: First, the quantitative relationship among the radiation count rate (n), the amplitude of the input signal (Vin), the conversion scale of the ADC (±V) and the amplification factor (A) was derived. Second, the hardware and software of the front-end controller were designed to match the output of different detectors, to adjust the amplification linearly through the control of channel switching, and to set the digital potentiometer by CPLD (Complex Programmable Logic Device). Results: (1) Through measurement of the γ-rays of Am-241 with our digital nuclear measurement setup with a CZT detector, it was validated that the amplitude of the output signal of RC-feedback-type detectors could be amplified linearly with adjustable amplification by the front-end controller. (2) Through measurement of the X-ray spectrum of Fe-5.5 with our digital nuclear measurement setup with a Si-PIN detector, it was validated that the front-end controller is also suitable for switch-reset-type detectors, enabling high-precision measurement under various count rates. Conclusion: The principle and method of the parameter-adjustable front-end controller presented in this paper are correct and feasible. (authors)

  20. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with AP.

  1. ISLSCP II FASIR-adjusted NDVI Biophysical Parameter Fields, 1982-1998

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The Fourier-Adjusted, Sensor and Solar zenith angle corrected, Interpolated, Reconstructed (FASIR) adjusted Normalized Difference Vegetation Index (NDVI)...

  2. ISLSCP II FASIR-adjusted NDVI Biophysical Parameter Fields, 1982-1998

    Data.gov (United States)

    National Aeronautics and Space Administration — The Fourier-Adjusted, Sensor and Solar zenith angle corrected, Interpolated, Reconstructed (FASIR) adjusted Normalized Difference Vegetation Index (NDVI) data set...

  3. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  4. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category of models which characterize economic growth. The paper proposes the study of the R.M. Solow adjusted model of economic growth, with the adjustment consisting of adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to macroeconomic modelling using the R.M. Solow model, the others being “Measurement of the economic growth and extensions of the R.M. Solow adjusted model” and “Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model”. The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
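
    A compact sketch of discrete-time Solow dynamics in per-effective-worker form with Cobb-Douglas technology, the kind of dynamics analysed via the state diagram above; the parameter values are generic illustrations, not the Romanian calibration used in the paper series.

```python
import numpy as np

def solow_path(k0, s, n, g, delta, alpha, periods=100):
    """Capital per effective worker: k_{t+1} = (s*k_t**alpha + (1-delta)*k_t) / ((1+n)*(1+g))."""
    k = np.empty(periods)
    k[0] = k0
    for t in range(periods - 1):
        k[t + 1] = (s * k[t]**alpha + (1 - delta) * k[t]) / ((1 + n) * (1 + g))
    return k

def steady_state(s, n, g, delta, alpha):
    """k* satisfying k**(1 - alpha) = s / (n + g + n*g + delta)."""
    return (s / (n + g + n * g + delta))**(1.0 / (1.0 - alpha))

# Generic illustrative parameters: saving rate, population growth, technology growth,
# depreciation, capital share.
path = solow_path(k0=1.0, s=0.25, n=0.01, g=0.02, delta=0.05, alpha=0.33)
print(round(path[-1], 3), round(steady_state(0.25, 0.01, 0.02, 0.05, 0.33), 3))
```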

  5. CA-CFAR Adjustment Factor Correction with a priori Knowledge of the Clutter Distribution Shape Parameter

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2017-08-01

    Full Text Available The operation of oceanic and coastal radars is affected because the target information is received mixed with an undesired contribution called sea clutter. Specifically, the popular CA-CFAR processor is incapable of maintaining its design false alarm probability when facing clutter with statistical variations. In contrast to the classic alternative of using a fixed adjustment factor, the authors propose a modification of the CA-CFAR scheme where the factor is constantly corrected according to the statistical changes of the background signal. Mathematically translated as a variation in the shape parameter of the clutter distribution, the background signal changes were simulated through the Weibull, Log-Normal and K distributions, deriving expressions which allow choosing an appropriate factor for each possible statistical state. The investigation contributes to the improvement of radar detection by suggesting the application of an adaptive scheme which assumes that the clutter shape parameter is known a priori. The offered mathematical expressions are valid for three false alarm probabilities and several window sizes, also covering a wide range of clutter conditions.
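
    For reference, a sketch of the classical CA-CFAR detector with its fixed adjustment factor, the baseline that the proposed adaptive correction improves upon; the window sizes, design false-alarm probability and simulated square-law clutter are illustrative choices.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR with the classical fixed adjustment factor.

    For exponentially distributed (square-law) noise the factor
    alpha = N * (pfa**(-1/N) - 1) holds the design false-alarm rate; the
    paper's point is that alpha should instead track the estimated clutter
    shape parameter when the statistics deviate from this assumption.
    """
    N = 2 * n_train
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    half = n_train + n_guard
    detections = []
    for i in range(half, len(power) - half):
        lead = power[i - half:i - n_guard]             # training cells before the cell under test
        lag = power[i + n_guard + 1:i + half + 1]      # training cells after the cell under test
        noise = (lead.sum() + lag.sum()) / N
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(4)
clutter = rng.exponential(1.0, 1000)   # homogeneous square-law clutter
clutter[500] += 40.0                   # injected target
print(ca_cfar(clutter))
```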

  6. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    Science.gov (United States)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been clinically applied. Recently, more and more cases have been reported with positive therapeutic effects using transcranial light-emitting diode (LED) illumination. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of the subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software provides on/off control of each LED array, the setup of illumination parameters, and the 3D distribution of the LLLT light dose in the human subject according to the illumination setup. This LLLT light dose distribution was computed by a Monte Carlo model for voxelized media and the Visible Chinese Human head dataset, and displayed in a 3D view against the background of the head's anatomical structure. The performance of the whole system was fully tested. One stroke patient was recruited in the preliminary LLLT experiment, and the subsequent neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggested the potential of this illumination-parameter-adjustable and illumination-distribution-visible LED helmet as a reliable, noninvasive, and effective tool in treating brain injuries.

  7. Parenting Stress, Mental Health, Dyadic Adjustment: A Structural Equation Model

    Directory of Open Access Journals (Sweden)

    Luca Rollè

    2017-05-01

    Full Text Available Objective: In the 1st year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, there are few studies that analyze the relationship among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed a full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ across mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood.

  8. Luminescence model with quantum impact parameter for low energy ions

    CERN Document Server

    Cruz-Galindo, H S; Martínez-Davalos, A; Belmont-Moreno, E; Galindo, S

    2002-01-01

    We have modified an analytical model of induced light production by energetic ions interacting in scintillating materials. The original model is based on the distribution of energy deposited by secondary electrons produced along the ion's track. The range of scattered electrons, and thus the energy distribution, depends on a classical impact parameter between the electron and the ion's track. The only adjustable parameter of the model is the quenching density ρq. The modification presented here consists in proposing a quantum impact parameter that leads to a better fit of the model to the experimental data at low incident ion energies. The light output response of CsI(Tl) detectors to low-energy ions (<3 MeV/A) is fitted with the modified model and compared to the original model.

  9. A price adjustment process in a model of monopolistic competition

    NARCIS (Netherlands)

    Tuinstra, J.

    2004-01-01

    We consider a price adjustment process in a model of monopolistic competition. Firms have incomplete information about the demand structure. When they set a price they observe the amount they can sell at that price and they observe the slope of the true demand curve at that price. With this

  10. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
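
    A rough Monte Carlo approximation of Tukey's half-space depth, the geometric criterion used above to rank parameter vectors: vectors central to the well-performing set get high depth, peripheral ones low depth. Exact depth computation and the HBV model itself are outside this sketch, and the two-dimensional parameter sample is synthetic.

```python
import numpy as np

def tukey_depth(point, cloud, n_dirs=500, seed=0):
    """Monte Carlo approximation of the half-space depth of `point` w.r.t. `cloud`."""
    rng = np.random.default_rng(seed)
    cloud = np.asarray(cloud)
    n, d = cloud.shape
    depth = n
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        side = np.sum(cloud @ u >= point @ u)   # points in one closed half-space through `point`
        depth = min(depth, side, n - side)
    return depth / n

# Synthetic 2-D "parameter vectors" with good model performance.
rng = np.random.default_rng(5)
good_params = rng.normal(size=(200, 2))
print(tukey_depth(np.array([0.0, 0.0]), good_params))   # central vector -> high depth
print(tukey_depth(np.array([3.0, 3.0]), good_params))   # peripheral vector -> low depth
```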

  11. Model parameter updating using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Treml, C. A. (Christine A.); Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  12. The Influence of Common-rail Adjustment on the Parameters of a Diesel Tractor Engine

    Directory of Open Access Journals (Sweden)

    Lukáš Tunka

    2016-01-01

    Full Text Available The article deals with the issue of high-pressure indication of a diesel tractor engine Z 1727, which was fitted with a modern electronically controlled common-rail injection system. The aim of the study is to evaluate the influence of the adjustment of the fuel system – the start of injection (SOI) timing and the rail pressure (PRAIL) – on the pressure development in the cylinder (PCYL), the heat release (HR) and the combustion noise level (CNLA). Furthermore, the article examines the influence of pilot and post fuel injections on the CNLA. The experiments were conducted at constant speed (1480 rpm) with four PRAILs and different SOI timings. As the results of measurements have shown, higher rail pressure causes higher pressure and a release of a larger amount of heat in the cylinder. These two parameters are the basic prerequisite for higher engine efficiency – higher power output of the engine at lower fuel consumption and decreased production of harmful emissions. Other advantages of the common-rail fuel system include the potential of dividing the main injection dose into the pilot injection and main injection, as well as the potential post injection. The measurements have further demonstrated that including a pilot injection phase significantly contributes to a decrease in combustion noise level as well as a more even, quieter operation of the engine.

  13. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  14. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in model parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
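
    A rough illustration of the weighting idea, not the authors' implementation: the sketch below calibrates a linear(ized) model by generalized least squares, with per-observation weights recomputed each iteration from an assumed relative pumping error, so that observations most affected by uncertain pumping count less. The Jacobian, error levels and iteration rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linearised groundwater model: simulated heads = J @ params.
# J, the relative pumping error and the noise levels are all invented here.
n_obs, n_par = 80, 3
J = rng.normal(size=(n_obs, n_par))
true_params = np.array([2.0, -1.0, 0.5])
rel_pump_err = 0.15                                     # assumed relative pumping uncertainty
heads_obs = J @ true_params * (1 + rng.normal(scale=rel_pump_err, size=n_obs)) \
            + rng.normal(scale=0.05, size=n_obs)

params = np.linalg.lstsq(J, heads_obs, rcond=None)[0]   # OLS starting point
for _ in range(25):
    sim = J @ params
    # Total variance = measurement variance + pumping-induced variance; the
    # latter is re-evaluated from the current simulated response, which is the
    # "iteratively adjusted" weighting of the IUWLS idea.
    var = 0.05**2 + (rel_pump_err * sim) ** 2
    W = np.diag(1.0 / var)
    new = np.linalg.solve(J.T @ W @ J, J.T @ W @ heads_obs)
    if np.allclose(new, params, atol=1e-10):
        break
    params = new

print("IUWLS-style estimate:", params)
print("true values:         ", true_params)
```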

  15. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that the…

  16. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
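
    For contrast with the basis-expansion methods proposed in the paper, the conventional approach it improves upon can be sketched directly: every candidate parameter value triggers a full numerical solve of the PDE. A minimal example, fitting the diffusion coefficient of a 1-D heat equation to synthetic data (grid sizes, noise level and bounds are arbitrary), follows.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_heat(D, nx=50, nt=400, L=1.0, T=0.1):
    """Explicit finite-difference solution of u_t = D u_xx with u = 0 at both ends."""
    dx, dt = L / (nx - 1), T / nt
    u = np.sin(np.pi * np.linspace(0, L, nx))            # initial condition
    for _ in range(nt):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

true_D = 0.4
rng = np.random.default_rng(1)
data = solve_heat(true_D) + rng.normal(scale=0.01, size=50)   # noisy observations at t = T

# Conventional approach: every candidate D requires a full numerical solve.
loss = lambda D: np.sum((solve_heat(D) - data) ** 2)
res = minimize_scalar(loss, bounds=(0.05, 0.8), method="bounded")
print("estimated D:", res.x, "true D:", true_D)
```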

  17. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady state response, when using lumped-parameter models, is given. (au)

  18. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  19. CHAMP: Changepoint Detection Using Approximate Model Parameters

    Science.gov (United States)

    2014-06-01

    form (with independent emissions or otherwise), in which parameter estimates are available via means such as maximum likelihood fit, MCMC, or sample … counterparts, including the ability to generate a full posterior distribution over changepoint locations and offering a natural way to incorporate prior … sample consensus method. Our modifications also remove a significant restriction on model definition when detecting parameter changes within a single

  20. Exploiting intrinsic fluctuations to identify model parameters.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion.

  1. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  2. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…

  3. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1)…

  4. Setting Parameters for Biological Models With ANIMO

    Directory of Open Access Journals (Sweden)

    Stefano Schivo

    2014-03-01

    Full Text Available ANIMO (Analysis of Networks with Interactive MOdeling) is a software for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions between biological entities in the form of a graph, while the parameters determine the speed of occurrence of such interactions. When a mismatch is observed between the behavior of an ANIMO model and experimental data, we want to update the model so that it explains the new data. In general, the topology of a model can be expanded with new (known or hypothetical) nodes, enabling it to match experimental data. However, the unrestrained addition of new parts to a model causes two problems: models can become too complex too fast, to the point of being intractable, and too many parts marked as "hypothetical" or "not known" make a model unrealistic. Even if changing the topology is normally the easier task, these problems push us to try a better parameter fit as a first step, and resort to modifying the model topology only as a last resource. In this paper we show the support added in ANIMO to ease the task of expanding the knowledge on biological networks, concentrating in particular on the parameter settings.

  5. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
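
    The Bradley-Terry special case mentioned above has a particularly transparent maximum-likelihood fit. The sketch below simulates pair comparisons and recovers the strength parameters with the classical minorization-maximization update; the item count, number of comparisons and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 5
true_strength = np.exp(rng.normal(size=n_items))

# Simulate pair comparisons (top-1 choices from 2-item comparison sets).
wins = np.zeros((n_items, n_items))            # wins[i, j] = number of times i beat j
for _ in range(4000):
    i, j = rng.choice(n_items, size=2, replace=False)
    p_i = true_strength[i] / (true_strength[i] + true_strength[j])
    if rng.random() < p_i:
        wins[i, j] += 1
    else:
        wins[j, i] += 1

games = wins + wins.T                          # comparisons played between each pair
p = np.ones(n_items)                           # strength parameters to estimate
for _ in range(200):
    for i in range(n_items):
        mask = games[i] > 0
        denom = np.sum(games[i, mask] / (p[i] + p[mask]))
        p[i] = wins[i].sum() / denom           # Hunter's MM update for Bradley-Terry
    p /= p.sum()                               # strengths are only defined up to scale

print("estimated:", np.round(p, 3))
print("true:     ", np.round(true_strength / true_strength.sum(), 3))
```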

  6. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of the widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of more reliable and new experimental data are adopted, which include the average level spacing D, the radiative capture width Γγ0 at the neutron binding energy and the cumulative level number N0 at low excitation energy. They were published between 1973 and 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate appreciably from the experimental values. In order to improve the fitting, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements

  7. Evaluating Parameter Adjustment in the MODIS Gross Primary Production Algorithm Based on Eddy Covariance Tower Measurements

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2014-04-01

    Full Text Available How well parameterization will improve gross primary production (GPP) estimation using the MODerate-resolution Imaging Spectroradiometer (MODIS) algorithm has rarely been investigated. We adjusted the parameters in the algorithm for 21 selected eddy-covariance flux towers which represented nine typical plant functional types (PFTs). We then compared these estimates with those of the MOD17A2 product, with estimates by the MODIS algorithm using default parameters from the Biome Property Look-Up Table, and with estimates by a two-leaf Farquhar model. The results indicate that optimizing the maximum light use efficiency (εmax) in the algorithm would improve GPP estimation, especially for deciduous vegetation, though it could not compensate for the underestimation during summer caused by the one-leaf upscaling strategy. Adding a soil water factor to the algorithm would not significantly affect performance, but it could make the adjusted εmax more robust for sites with the same PFT and among different PFTs. Even with adjusted parameters, neither the one-leaf nor the two-leaf model captured seasonal photosynthetic dynamics; we therefore suggest that further improvement in GPP estimation requires taking into consideration seasonal variations of the key parameters and variables.
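
    For context, the MODIS GPP algorithm is a light-use-efficiency model in which εmax is attenuated by minimum-temperature and vapour-pressure-deficit scalars. The sketch below implements that general form; the εmax value and ramp limits are illustrative stand-ins, not the Biome Property Look-Up Table entries.

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear ramp: 0 at or below `lo`, 1 at or above `hi` (the form of the MODIS scalars)."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def modis_like_gpp(par, fpar, tmin, vpd,
                   eps_max=1.2e-3,                 # kg C per MJ APAR -- illustrative value
                   tmin_lo=-8.0, tmin_hi=10.0,     # deg C ramp limits (assumed)
                   vpd_lo=650.0, vpd_hi=3000.0):   # Pa ramp limits (assumed)
    """Light-use-efficiency GPP: eps_max attenuated by Tmin and VPD scalars."""
    t_scalar = ramp(tmin, tmin_lo, tmin_hi)        # cold temperatures reduce efficiency
    v_scalar = 1.0 - ramp(vpd, vpd_lo, vpd_hi)     # high vapour pressure deficit reduces it
    return eps_max * t_scalar * v_scalar * fpar * par   # kg C m-2 per time step

# One day, one pixel: PAR in MJ m-2 d-1, FPAR dimensionless.
print(modis_like_gpp(par=10.0, fpar=0.7, tmin=5.0, vpd=1200.0))
```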

  8. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like filter error method, H-infinity and Artificial Neural Networks are finding more and mor

  9. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  10. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  11. Models and parameters for environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base

  12. A Developmental Sequence Model to University Adjustment of International Undergraduate Students

    OpenAIRE

    Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna

    2017-01-01

    The current study proposes a Developmental Sequence Model to University Adjustment and uses a multifaceted measure, including academic, social and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchic regression model is carried out on the Student Adaptation to College Questionnaire to examine theoretically pertinent predictors arranged in a developmental sequence in determining adjustment outcomes. This model...

  13. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  14. Model for Adjustment of Aggregate Forecasts using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Taracena–Sanz L. F.

    2010-07-01

    Full Text Available This research suggests a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the demand projection to the firm's environment, and it is based on three considerations that explain why in many cases demand forecasts differ from reality: 1) one of the problems most difficult to model in forecasting is the uncertainty related to the information available; 2) the methods traditionally used by firms for the projection of demand are mainly based on the past behavior of the market (historical demand); and 3) these methods do not consider in their analysis the factors that are influencing the observed behaviour. Therefore, the proposed model is based on the implementation of fuzzy logic, integrating the main variables that affect the behavior of market demand and which are not considered in the classical statistical methods. The model was applied to a bottling company for carbonated beverages, and with the adjustment of the demand projection a more reliable forecast was obtained.
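
    The record does not spell out the rule base, but the general mechanism — fuzzify market variables, fire a small set of rules, and defuzzify into a correction applied to the statistical forecast — can be sketched as follows. All membership functions, rules, variable names and numbers below are invented for illustration.

```python
import numpy as np

def falling(x, lo, hi):
    """Membership that is 1 at or below `lo` and falls linearly to 0 at `hi`."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def rising(x, lo, hi):
    """Membership that is 0 at or below `lo` and rises linearly to 1 at `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def adjusted_forecast(base_forecast, promo_intensity, competitor_price_gap):
    """Scale a statistical (historical-demand) forecast by a fuzzy correction factor."""
    # Fuzzify two hypothetical market variables, each rated on a 0-10 scale.
    promo_low, promo_high = falling(promo_intensity, 2, 7), rising(promo_intensity, 3, 8)
    gap_unfav, gap_fav = falling(competitor_price_gap, 2, 7), rising(competitor_price_gap, 3, 8)

    # Invented rule base: each rule pairs a firing strength with a correction factor.
    rules = [
        (min(promo_high, gap_fav), 1.25),    # strong promotion, favourable prices: +25 %
        (min(promo_low, gap_unfav), 0.85),   # weak promotion, unfavourable prices: -15 %
        (min(promo_high, gap_unfav), 1.05),
        (min(promo_low, gap_fav), 1.10),
    ]
    strengths = np.array([s for s, _ in rules])
    factors = np.array([f for _, f in rules])
    factor = np.average(factors, weights=strengths) if strengths.sum() > 0 else 1.0
    return base_forecast * factor

print(adjusted_forecast(base_forecast=10_000, promo_intensity=8, competitor_price_gap=7))
```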

  15. Risk-adjusted models for adverse obstetric outcomes and variation in risk-adjusted outcomes across hospitals.

    Science.gov (United States)

    Bailit, Jennifer L; Grobman, William A; Rice, Madeline Murguia; Spong, Catherine Y; Wapner, Ronald J; Varner, Michael W; Thorp, John M; Leveno, Kenneth J; Caritis, Steve N; Shubert, Phillip J; Tita, Alan T; Saade, George; Sorokin, Yoram; Rouse, Dwight J; Blackwell, Sean C; Tolosa, Jorge E; Van Dorsten, J Peter

    2013-11-01

    Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take preexisting patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for 5 obstetric outcomes and assess hospital performance across these outcomes. We studied a cohort of 115,502 women and their neonates born in 25 hospitals in the United States from March 2008 through February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Venous thromboembolism occurred too infrequently (0.03%; 95% confidence interval [CI], 0.02-0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage, 2.29%; 95% CI, 2.20-2.38; peripartum infection, 5.06%; 95% CI, 4.93-5.19; severe perineal laceration at spontaneous vaginal delivery, 2.16%; 95% CI, 2.06-2.27; neonatal composite, 2.73%; 95% CI, 2.63-2.84). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital-adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. Copyright © 2013 Mosby, Inc. All rights reserved.

  16. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    International Nuclear Information System (INIS)

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings normalized by the core average is 0.053 at the rated power. The k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized as a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP reading at the partial power states
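
    A toy version of the fitting loop described above: steepest descent on the mean-squared mismatch between measured and simulated detector readings, with the simulator treated as a black box and its gradient taken by finite differences. The two-parameter "simulator" below merely stands in for FLARE, and the synthetic TIP profile is made up.

```python
import numpy as np

tip_measured = 1.0 + 0.2 * np.sin(np.linspace(0, 3, 24))   # stand-in for measured TIP readings

def simulator(params):
    """Toy stand-in for the core simulator: maps adjustable parameters
    to a normalized axial 'TIP' profile."""
    a, b = params
    z = np.linspace(0, 3, 24)
    return a + b * np.sin(z)

def cost(params):
    return np.mean((simulator(params) - tip_measured) ** 2)

def grad(params, h=1e-5):
    """Central finite-difference gradient (the simulator is a black box)."""
    g = np.zeros_like(params)
    for k in range(len(params)):
        step = np.zeros_like(params)
        step[k] = h
        g[k] = (cost(params + step) - cost(params - step)) / (2 * h)
    return g

params = np.array([0.5, 0.0])          # initial guess for the adjustable parameters
lr = 0.5                               # steepest-descent step size
for it in range(500):
    g = grad(params)
    params -= lr * g
    if np.linalg.norm(g) < 1e-8:
        break

print("fitted parameters:", params, "rms error:", np.sqrt(cost(params)))
```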

  17. Advances in Modelling, System Identification and Parameter ...

    Indian Academy of Sciences (India)

    models determined from flight test data by using parameter estimation methods find extensive use in design/modification of flight control systems, high fidelity flight simulators and evaluation of handling qualities of aircraft and rotorcraft. R K Mehra et al present new algorithms and results for flutter tests and adaptive notching ...

  18. A lumped parameter model of plasma focus

    International Nuclear Information System (INIS)

    Gonzalez, Jose H.; Florido, Pablo C.; Bruzzone, H.; Clausse, Alejandro

    1999-01-01

    A lumped parameter model to estimate neutron emission of a plasma focus (PF) device is developed. The dynamics of the current sheet are calculated using a snowplow model, and the neutron production with the thermal fusion cross section for a deuterium filling gas. The results were contrasted, as a function of the filling pressure, with experimental measurements of a 3.68 kJ Mather-type PF. (author)

  19. One parameter model potential for noble metals

    International Nuclear Information System (INIS)

    Idrees, M.; Khwaja, F.A.; Razmi, M.S.K.

    1981-08-01

    A phenomenological one parameter model potential which includes s-d hybridization and core-core exchange contributions is proposed for noble metals. A number of interesting properties like liquid metal resistivities, band gaps, thermoelectric powers and ion-ion interaction potentials are calculated for Cu, Ag and Au. The results obtained are in better agreement with experiment than the ones predicted by the other model potentials in the literature. (author)

  20. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    Science.gov (United States)

    Barsai, Gabor

    provides a path to fuse data from lidar, GIS and digital multispectral images and reconstructing the precise 3-D scene model, without human intervention, regardless of the type of data or features in the data. The data are initially registered to each other using GPS/INS initial positional values, then conjugate features are found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds edges of buildings with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge extracted image/map. These edge points are then utilized to orient and locate the datasets, in a correct position with respect to each other.

  1. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    Science.gov (United States)

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  2. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
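
    The optimization step itself is generic. A compact, self-contained genetic algorithm of the kind described (real-coded population, tournament selection, blend crossover, Gaussian mutation, latitude-dependent weighting of the misfit) is sketched below, with a two-parameter toy profile standing in for the surface flux transport model.

```python
import numpy as np

rng = np.random.default_rng(7)
lat = np.linspace(-90, 90, 37)

def forward(params):
    """Toy stand-in for the flux transport model: two parameters shape a latitude profile."""
    flow_speed, decay_scale = params
    return flow_speed * np.sin(np.deg2rad(2 * lat)) * np.exp(-np.abs(lat) / decay_scale)

true = np.array([12.0, 40.0])
obs = forward(true) + rng.normal(scale=0.3, size=lat.size)      # "observed" butterfly slice
weight = 1.0 / (1.0 + (np.abs(lat) / 60.0) ** 2)                # down-weight uncertain high latitudes

def fitness(params):
    return -np.sum(weight * (forward(params) - obs) ** 2)

bounds = np.array([[1.0, 30.0], [5.0, 80.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for gen in range(150):
    fit = np.array([fitness(p) for p in pop])
    new_pop = [pop[np.argmax(fit)]]                             # elitism: keep the best individual
    while len(new_pop) < len(pop):
        # Tournament selection of two parents from random triples.
        a, b = (pop[max(rng.choice(len(pop), 3), key=lambda i: fit[i])] for _ in range(2))
        child = a + rng.uniform(0, 1, 2) * (b - a)              # blend crossover
        child += rng.normal(scale=0.02 * (bounds[:, 1] - bounds[:, 0]))  # Gaussian mutation
        new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", best, "true:", true)
```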

  3. Risk adjusted receding horizon control of constrained linear parameter varying systems

    NARCIS (Netherlands)

    Sznaier, M.; Lagoa, C.; Stoorvogel, Antonie Arij; Li, X.

    2005-01-01

    In the past few years, control of Linear Parameter Varying Systems (LPV) has been the object of considerable attention, as a way of formalizing the intuitively appealing idea of gain scheduling control for nonlinear systems. However, currently available LPV techniques are both computationally

  4. Comparative Analysis of ITRF Transformation Parameters from Least-Squares Estimation with the Helmert 14-Parameter Model versus the IERS Standard Parameters

    OpenAIRE

    Fadly, Romi; Dewi, Citra

    2014-01-01

    This research aims to compare the 14 transformation parameters between ITRF frames obtained from a least-squares computation using the Helmert 14-parameter model with the IERS standard parameters. The transformation parameters are calculated from the coordinates and velocities of ITRF05 to ITRF00 at epoch 2000.00, and from ITRF08 to ITRF05 at epoch 2005.00, for the respective transformation models. The transformation parameters are compared to the IERS standard parameters, then the significance of the d...

  5. Adjusting inkjet printhead parameters to deposit drugs into micro-sized reservoirs

    Directory of Open Access Journals (Sweden)

    Mau Robert

    2016-09-01

    Full Text Available Drug delivery systems (DDS) ensure that therapeutically effective drug concentrations are delivered locally to the target site. For that reason, it is common to coat implants with a degradable polymer which contains drugs. However, the use of polymers as a drug carrier has been associated with adverse side effects. For that reason, several technologies have been developed to design polymer-free DDS. In the literature it has been shown that micro-sized reservoirs can be applied as drug reservoirs. Inkjet techniques are capable of depositing drugs into these reservoirs. In this study, two different geometries of micro-sized reservoirs have been laden with a drug (ASA) using a drop-on-demand inkjet printhead. Correlations between the characteristics of the drug solution, the operating parameters of the printhead and the geometric parameters of the reservoir are shown. It is indicated that the wettability of the surface plays a key role for drug deposition into micro-sized reservoirs.

  6. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  7. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

    This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is being used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  8. Improving the transferability of hydrological model parameters under changing conditions

    Science.gov (United States)

    Huang, Yingchun; Bárdossy, András

    2014-05-01

    Hydrological models are widely utilized to describe catchment behaviour with observed hydro-meteorological data. Hydrological processes may be considered non-stationary under changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on the stationarity assumption are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments in non-stationary conditions, we investigate the idea of simultaneous calibration on streamflow records for periods with dissimilar climate characteristics. In addition, a weather-based weighting function is implemented to adjust the calibration period to future trends. For regions with limited data and ungauged basins, common calibration was applied by using information from similar catchments. Results show that model performance and the quality of parameter transfer could be considerably improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.

  9. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modeling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modeling the arrival of Korean tourists in Bali. The number of Korean tourists who visited Bali in the period January 2010 to December 2015 was used to model Korean tourist arrivals in Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Observing that tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP and its parameters were approximated using the Kalman filter algorithm. The results showed that all of the predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
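
    A time-varying-parameter regression of this kind is usually written as an observation equation with random-walk coefficients and filtered with the Kalman recursions. The sketch below does exactly that on simulated data standing in for the tourism series; the noise variances, dimensions and seeds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 72, 3                               # 72 months, 3 regressors (e.g. intercept, WON, INFKR)
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])

# Simulate slowly drifting true coefficients and the observed series.
beta_true = np.cumsum(rng.normal(scale=0.05, size=(T, k)), axis=0) + np.array([5.0, 1.0, -0.5])
y = np.einsum("tk,tk->t", X, beta_true) + rng.normal(scale=0.3, size=T)

# Kalman filter for: y_t = x_t' beta_t + e_t,   beta_t = beta_{t-1} + w_t
sigma_e2 = 0.3 ** 2                        # assumed observation noise variance
Q = 0.05 ** 2 * np.eye(k)                  # assumed random-walk (state) noise covariance
beta = np.zeros(k)
P = 10.0 * np.eye(k)                       # vague initial state covariance
filtered = np.zeros((T, k))
for t in range(T):
    P = P + Q                              # predict step (state mean is unchanged)
    x = X[t]
    S = x @ P @ x + sigma_e2               # innovation variance
    K = P @ x / S                          # Kalman gain
    beta = beta + K * (y[t] - x @ beta)    # update with observation t
    P = P - np.outer(K, x) @ P
    filtered[t] = beta

print("final filtered coefficients:", np.round(filtered[-1], 2))
print("true coefficients at T:     ", np.round(beta_true[-1], 2))
```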

  10. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    Science.gov (United States)

    Papale, Elena; Gamba, Marco; Perez-Gil, Monica; Martin, Vidal Martel; Giacoma, Cristina

    2015-01-01

    An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  11. Dolphins adjust species-specific frequency parameters to compensate for increasing background noise.

    Directory of Open Access Journals (Sweden)

    Elena Papale

    Full Text Available An increase in ocean noise levels could interfere with acoustic communication of marine mammals. In this study we explored the effects of anthropogenic and natural noise on the acoustic properties of a dolphin communication signal, the whistle. A towed array with four elements was used to record environmental background noise and whistles of short-beaked common-, Atlantic spotted- and striped-dolphins in the Canaries archipelago. Four frequency parameters were measured from each whistle, while Sound Pressure Levels (SPL) of the background noise were measured at the central frequencies of seven one-third octave bands, from 5 to 20 kHz. Results show that dolphins increase the whistles' frequency parameters with lower variability in the presence of anthropogenic noise, and increase the end frequency of their whistles when confronted with increasing natural noise. This study provides the first evidence that the synergy among SPLs has a role in shaping the whistles' structure of these three species, with respect to both natural and anthropogenic noise.

  12. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2  ×  2 cm2 to 40  ×  40 cm2, a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  13. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm2 to 40 × 40 cm2, a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  14. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
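
    The Green's-functions step itself reduces to a small linear least-squares problem: each sensitivity experiment contributes one column of model-minus-baseline responses sampled at the observation points, and the combination weights that best match the data are solved for directly. A toy sketch with invented dimensions and weighting follows.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_exp = 500, 6                      # e.g. pCO2 observations and 6 sensitivity runs

baseline = rng.normal(size=n_obs)          # baseline simulation sampled at the observations
# Each sensitivity experiment perturbs one control parameter (initial DIC, alkalinity,
# gas-exchange coefficient, ...); its column is the experiment-minus-baseline response.
G = rng.normal(size=(n_obs, n_exp))
true_eta = np.array([0.8, -0.3, 0.5, 0.0, 1.2, -0.7])
obs = baseline + G @ true_eta + rng.normal(scale=0.2, size=n_obs)

# Optional data weighting (e.g. by observational uncertainty), applied row-wise.
w = np.full(n_obs, 1.0 / 0.2)
eta, *_ = np.linalg.lstsq(G * w[:, None], (obs - baseline) * w, rcond=None)

print("estimated weights:", np.round(eta, 2))
print("true weights:     ", true_eta)
# The adjusted control parameters are then the baseline values plus these weights
# applied to the corresponding parameter perturbations.
```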

  15. Lumped Parameters Model of a Crescent Pump

    Directory of Open Access Journals (Sweden)

    Massimo Rundo

    2016-10-01

    Full Text Available This paper presents the lumped parameters model of an internal gear crescent pump with relief valve, able to estimate the steady-state flow-pressure characteristic and the pressure ripple. The approach is based on the identification of three variable control volumes regardless of the number of gear teeth. The model has been implemented in the commercial environment LMS Amesim with the development of customized components. Specific attention has been paid to the leakage passageways, some of them affected by the deformation of the cover plate under the action of the delivery pressure. The paper reports the finite element method analysis of the cover for the evaluation of the deflection and the validation through a contactless displacement transducer. Another aspect described in this study is represented by the computational fluid dynamics analysis of the relief valve, whose results have been used for tuning the lumped parameters model. Finally, the validation of the entire model of the pump is presented in terms of steady-state flow rate and of pressure oscillations.

  16. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  17. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    Science.gov (United States)

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso possesses no oracle properties; a method with the oracle property asymptotically performs as though the true underlying model was given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data was simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
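
    In the plain linear-regression setting the adjustment amounts to replacing the usual ALasso weights 1/|β̂| with SE(β̂)/|β̂|. The sketch below illustrates this with ordinary least-squares initial estimates and scikit-learn's Lasso on rescaled covariates; the mixed-effects machinery of the paper is not reproduced, and the data and penalty level are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(11)
n, p = 200, 8
X = rng.normal(size=(n, p))
X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=n)        # introduce collinearity
beta_true = np.array([1.5, 0.0, 0.0, -1.0, 0.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# ML (here OLS) estimates and their standard errors.
XtX_inv = np.linalg.inv(X.T @ X)
beta_ml = XtX_inv @ X.T @ y
resid = y - X @ beta_ml
sigma2 = resid @ resid / (n - p)
se = np.sqrt(sigma2 * np.diag(XtX_inv))

# AALasso-style weights: SE / |coef| (plain ALasso would use 1 / |coef|).
w = se / np.abs(beta_ml)

# Solve the weighted L1 problem by rescaling the covariates, then undo the scaling.
X_scaled = X / w
fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=50_000).fit(X_scaled, y)
beta_hat = fit.coef_ / w

print("selected coefficients:", np.round(beta_hat, 2))
print("true coefficients:    ", beta_true)
```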

  18. A Developmental Sequence Model to University Adjustment of International Undergraduate Students

    Science.gov (United States)

    Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna

    2017-01-01

    The current study proposes a Developmental Sequence Model to University Adjustment and uses a multifaceted measure, including academic, social and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchic regression model is carried out on the Student Adaptation to College Questionnaire…

  19. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    Science.gov (United States)

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  20. Laser-plasma SXR/EUV sources: adjustment of radiation parameters for specific applications

    Science.gov (United States)

    Bartnik, A.; Fiedorowicz, H.; Fok, T.; Jarocki, R.; Kostecki, J.; Szczurek, A.; Szczurek, M.; Wachulak, P.; Wegrzyński, Ł.

    2014-12-01

    In this work, soft X-ray (SXR) and extreme ultraviolet (EUV) laser-produced plasma (LPP) sources employing Nd:YAG laser systems with different parameters are presented. The first is a 10-Hz EUV source based on a double-stream gas-puff target irradiated with a 3-ns/0.8-J laser pulse. In the second, a 10 ns/10 J/10 Hz laser system is employed, and the third utilizes a laser system with the pulse shortened to approximately 1 ns. Using various gases in the gas-puff targets it is possible to obtain intense radiation in different wavelength ranges; in this way, intense continuous radiation over a wide spectral range as well as quasi-monochromatic radiation was produced. To obtain high EUV or SXR fluence, the radiation was focused using three types of grazing-incidence collectors and a multilayer Mo/Si collector. The first is a multifoil gold-plated collector consisting of two orthogonal stacks of ellipsoidal mirrors forming a double-focusing device. The second is an ellipsoidal collector forming part of an axisymmetric ellipsoidal surface. The third is composed of two aligned axisymmetric paraboloidal mirrors optimized for focusing SXR radiation. The last collector is an off-axis ellipsoidal multilayer Mo/Si mirror allowing efficient focusing of the radiation in the spectral region centered at λ = 13.5 ± 0.5 nm. In this paper, spectra of unaltered EUV or SXR radiation produced in different LPP source configurations, together with spectra and fluence values of the focused radiation, are presented. Specific source configurations were assigned to various applications.

  1. Adjustment of treatment parameters for photodynamic therapy of cervical pre-cancer and cancer

    Directory of Open Access Journals (Sweden)

    I. P. Aminodova

    2015-01-01

    Full Text Available A comprehensive study to optimize the parameters of photodynamic action with fotoditazin in patients with tumor and pre-tumor cervical diseases was conducted. The study included 52 female patients: pre-invasive cervical diseases were diagnosed in 34 (CIN I in 9, CIN II in 13, CIN III in 12), cervical cancer in 11 (8 had squamous cell cancer, 3 adenocarcinoma of the cervical canal), and chronic cervicitis in 7. The study agent in the form of a 0.5% gel was applied to the cervix at a dose of 1 ml. To determine the optimal interval between gel application and photodynamic therapy, the dynamics of accumulation and elimination of the photosensitizer were studied by means of its fluorescence. Fotoditazin was shown to accumulate well in pathological tissues. The maximal agent accumulation was observed at 30 min, lasted about 15 min, and then gradually decreased. Maximal fluorescence of the photosensitizer was observed in foci of malignant tumor and severe intraepithelial neoplasia. To determine the optimal light dose for irradiation, a cytological study of cell smears from specimens after light exposure with different light doses was performed. The minimal light dose necessary for activation of the photochemical reaction pathway was 100 J/cm2, and the optimal dose was 250 J/cm2. This dose destroyed all atypical cells in the area of light exposure after application of the fotoditazin gel. According to the obtained data, the most efficient regimen of photodynamic therapy with local application of fotoditazin gel for treating dysplasia and pre-invasive cervical cancer was a laser irradiation dose of 250 J/cm2 with an application duration of 30–45 min.

  2. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

    The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on the MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to the multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for the future experiments. The Monte Carlo method was used for simulation of neutron spectra, energy deposition and dose calculations. Some of the calculation results are presented in the paper.

  3. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  4. Moose models with vanishing S parameter

    International Nuclear Information System (INIS)

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2) L and U(1) Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall Sundrum metric

  5. Nuclear Fission: from more phenomenology and adjusted parameters to more fundamental theory and increased predictive power

    Science.gov (United States)

    Bulgac, Aurel; Jin, Shi; Magierski, Piotr; Roche, Kenneth; Schunck, Nicolas; Stetcu, Ionel

    2017-11-01

    Two major recent developments in theory and computational resources created the favorable conditions for achieving a microscopic description of fission dynamics in classically allowed regions of the collective potential energy surface, almost eighty years after its discovery in 1939 by Hahn and Strassmann [1]. The first major development was in theory: the extension of Time-Dependent Density Functional Theory (TDDFT) [2-5] to superfluid fermion systems [6]. The second development was in computing: the emergence of supercomputers powerful enough to solve, without restrictions, the complex systems of equations describing the time evolution in three dimensions of hundreds of strongly interacting nucleons. Thus the conditions have been created to renounce phenomenological models and incomplete microscopic treatments with uncontrollable approximations and/or assumptions in the description of the complex dynamics of fission. Even though the available nuclear energy density functionals (NEDFs) are still phenomenological, their accuracy is improving steadily, and so are the prospects of being able to perform calculations of nuclear fission dynamics and to predict many properties of the fission fragments that are otherwise not possible to extract from experiments.

  6. Adjustment costs in a two-capital growth model

    Czech Academy of Sciences Publication Activity Database

    Duczynski, Petr

    2002-01-01

    Roč. 26, č. 5 (2002), s. 837-850 ISSN 0165-1889 R&D Projects: GA AV ČR KSK9058117 Institutional research plan: CEZ:AV0Z7085904 Keywords : adjustment costs * capital mobility * convergence * human capital Subject RIV: AH - Economics Impact factor: 0.738, year: 2002

  7. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

    presents approximate methods and discusses their applicability. We then discuss the problem of obtaining traffic characteristic values for a connection that has crossed a series of switching nodes. This problem is particularly relevant for the traffic contract components corresponding to ICIs...... (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called ''Worst Case Traffic'' that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User...... essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper...

  8. Genetic Parameters of Pre-adjusted Body Weight Growth and Ultrasound Measures of Body Tissue Development in Three Seedstock Pig Breed Populations in Korea

    Directory of Open Access Journals (Sweden)

    Yun Ho Choy

    2015-12-01

    Full Text Available The objective of this study was to compare the effects of body weight growth adjustment methods on genetic parameters of body growth and tissue among three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude (A)-mode ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90), through an adjustment of the age based on the body weight at the test day, were obtained. Ultrasound measures were also pre-adjusted (ABF, EMA, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and ultrasound traits, whereas model II and III accounted DAYS90 and pre-adjusted ultrasound traits. Fixed factors were sex (sex) and contemporary groups (herd-year-month of birth) for all traits among the models. Additionally, model I and II considered a linear covariate of final weight on the ultrasound measure traits. Heritability (h2) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h2 estimates of DAYS90 from model II and III were also somewhat similar. The h2 for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Our heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (−0.29 to −0.38), and between DAYS90 and EMA (−0.16 to −0.26). BF had strong rG with RCP (−0.87 to −0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds. For DAYS90, model II and III, its correlations with ABF, AEMA, and ARCP were mostly low or negligible except the

  9. Genetic Parameters of Pre-adjusted Body Weight Growth and Ultrasound Measures of Body Tissue Development in Three Seedstock Pig Breed Populations in Korea.

    Science.gov (United States)

    Choy, Yun Ho; Mahboob, Alam; Cho, Chung Il; Choi, Jae Gwan; Choi, Im Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho

    2015-12-01

    The objective of this study was to compare the effects of body weight growth adjustment methods on genetic parameters of body growth and tissue among three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude (A)-mode ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90), through an adjustment of the age based on the body weight at the test day, were obtained. Ultrasound measures were also pre-adjusted (ABF, EMA, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and ultrasound traits, whereas model II and III accounted DAYS90 and pre-adjusted ultrasound traits. Fixed factors were sex (sex) and contemporary groups (herd-year-month of birth) for all traits among the models. Additionally, model I and II considered a linear covariate of final weight on the ultrasound measure traits. Heritability (h(2)) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h(2) estimates of DAYS90 from model II and III were also somewhat similar. The h(2) for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Our heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (-0.29 to -0.38), and between DAYS90 and EMA (-0.16 to -0.26). BF had strong rG with RCP (-0.87 to -0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds. For DAYS90, model II and III, its correlations with ABF, AEMA, and ARCP were mostly low or negligible except the r

  10. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    Science.gov (United States)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

    Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg, 2010) for Fennoscandia and the dataset from Sella (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and the ice model denoted as ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. Firstly, the sensitivity to the maximum harmonic degree was tested in order to reduce the computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while the maximum harmonic degree was varied. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by no more than 10%, a lower harmonic degree may be used as well. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as a larger value did not significantly modify the results and the computational time was kept reasonable. Secondly, the GIA computations were performed to find the model that fits the GNSS-based velocity field in the target areas with the highest probability. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on horizontal and vertical velocity rates from GIA modelling was studied. For every

  11. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    Science.gov (United States)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural workflow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with the authoring of efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns, in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in the uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely, Disko Bay in West Greenland with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local

  12. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

    Dengue is a global health problem and of concern to travelers and deploying military personnel with development and licensure of an effective tetravalent dengue vaccine a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Dimensionality reduction of RKHS model parameters.

    Science.gov (United States)

    Taouali, Okba; Elaissi, Ilyes; Messaoud, Hassani

    2015-07-01

    This paper proposes a new method to reduce the number of parameters of models developed in a Reproducing Kernel Hilbert Space (RKHS). In fact, this number is equal to the number of observations used in the learning phase, which is assumed to be high. The proposed method, entitled Reduced Kernel Partial Least Squares (RKPLS), consists in approximating the retained latent components determined using the Kernel Partial Least Squares (KPLS) method by their closest observation vectors. The paper proposes the design and a comparative study of the proposed RKPLS method and the Support Vector Machines for Regression (SVR) technique. The proposed method is applied to identify a nonlinear Process Trainer PT326, a physical process available in our laboratory; moreover, as a thermal process with a large time response, it allows effective observations that contribute to model identification to be recorded easily. Compared to the SVR technique, the results from the proposed RKPLS method are satisfactory. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Delayed heart rate recovery after exercise as a risk factor of incident type 2 diabetes mellitus after adjusting for glycometabolic parameters in men.

    Science.gov (United States)

    Yu, Tae Yang; Jee, Jae Hwan; Bae, Ji Cheol; Hong, Won-Jung; Jin, Sang-Man; Kim, Jae Hyeon; Lee, Moon-Kyu

    2016-10-15

    Some studies have reported that delayed heart rate recovery (HRR) after exercise is associated with incident type 2 diabetes mellitus (T2DM). This study aimed to investigate the longitudinal association of delayed HRR following a graded exercise treadmill test (GTX) with the development of T2DM including glucose-associated parameters as an adjusting factor in healthy Korean men. Analyses including fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c as confounding factors and known confounders were performed. HRR was calculated as peak heart rate minus heart rate after a 1-min rest (HRR 1). Cox proportional hazards model was used to quantify the independent association between HRR and incident T2DM. During 9082 person-years of follow-up between 2006 and 2012, there were 180 (10.1%) incident cases of T2DM. After adjustment for age, BMI, systolic BP, diastolic BP, smoking status, peak heart rate, peak oxygen uptake, TG, LDL-C, HDL-C, fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, the hazard ratios (HRs) [95% confidence interval (CI)] of incident T2DM comparing the second and third tertiles to the first tertile of HRR 1 were 0.867 (0.609-1.235) and 0.624 (0.426-0.915), respectively (p for trend=0.017). As a continuous variable, in the fully-adjusted model, the HR (95% CI) of incident T2DM associated with each 1 beat increase in HRR 1 was 0.980 (0.960-1.000) (p=0.048). This study demonstrated that delayed HRR after exercise predicts incident T2DM in men, even after adjusting for fasting glucose, HOMA-IR, HOMA-β, and HbA1c. However, only HRR 1 had clinical significance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
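
    For readers unfamiliar with this type of analysis, the sketch below shows a covariate-adjusted Cox proportional-hazards model of the kind described above, using the lifelines package. The data frame, column names and simulated effect sizes are assumptions for illustration, not the study's cohort or its exact covariate set.

    ```python
    # Minimal covariate-adjusted Cox model on fabricated data (illustration only).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "hrr_1": rng.normal(30, 8, n),             # heart-rate recovery at 1 min (beats)
        "age": rng.normal(45, 9, n),
        "bmi": rng.normal(25, 3, n),
        "fasting_glucose": rng.normal(95, 10, n),
    })
    # Simulate follow-up time and diabetes events with lower hazard at higher HRR.
    risk = (0.03 * (df["age"] - 45) + 0.05 * (df["bmi"] - 25)
            + 0.02 * (df["fasting_glucose"] - 95) - 0.04 * (df["hrr_1"] - 30))
    df["time"] = rng.exponential(scale=8 * np.exp(-risk))
    df["event"] = (df["time"] < 6).astype(int)      # events observed within ~6 years
    df["time"] = df["time"].clip(upper=6.0)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()   # hazard ratio per 1-beat increase in hrr_1, adjusted for the rest
    ```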

  15. TruMicro Series 2000 sub-400 fs class industrial fiber lasers: adjustment of laser parameters to process requirements

    Science.gov (United States)

    Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2017-02-01

    The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment [1]. Within the next years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: these are non-contact wear-free processing, higher process accuracy or an increase of processing speed and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real time process-inherent adjustments of the parameters with industry-ready reliability. This industry-ready reliability is ensured by a long experience in designing and building ultrashort-pulse lasers in combination with rigorous optimization of the mechanical construction, optical components and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all-fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high repetition rate mode-locked fiber oscillator also enabling flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization

  16. Clustering reveals limits of parameter identifiability in multi-parameter models of biochemical dynamics.

    Science.gov (United States)

    Nienałtowski, Karol; Włodarczyk, Michał; Lipniacki, Tomasz; Komorowski, Michał

    2015-09-29

    Compared to engineering or physics problems, dynamical models in quantitative biology typically depend on a relatively large number of parameters. Progress in developing mathematics to manipulate such multi-parameter models and so enable their efficient interplay with experiments has been slow. Existing solutions are significantly limited by model size. In order to simplify the analysis of multi-parameter models, a method for clustering of model parameters is proposed. It is based on a derived statistically meaningful measure of similarity between groups of parameters. The measure quantifies to what extent changes in values of some parameters can be compensated by changes in values of other parameters. The proposed methodology provides a natural mathematical language to precisely communicate and visualise effects resulting from compensatory changes in values of parameters. As a result, relevant insight into identifiability analysis and experimental planning can be obtained. Analysis of NF-κB and MAPK pathway models shows that highly compensative parameters constitute clusters consistent with the network topology. Applied to an exceptionally rich set of published experiments on NF-κB dynamics, the method reveals that the experiments jointly ensure identifiability of only 60% of the model parameters. The method indicates which further experiments should be performed in order to increase the number of identifiable parameters. We currently lack methods that simplify broadly understood analysis of multi-parameter models. The introduced tools depict mutually compensative effects between parameters to provide insight regarding the role of individual parameters, identifiability and experimental design. The method can also find applications in the related methodological areas of model simplification and parameter estimation.
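
    The compensation idea can be illustrated with a small, self-contained sketch (not the authors' algorithm): parameters whose output-sensitivity vectors are nearly collinear can compensate for one another, so clustering those vectors groups "compensative" parameters. The toy model, perturbation size and clustering threshold below are assumptions for illustration.

    ```python
    # Group parameters by similarity of their finite-difference sensitivities.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def model(theta, t):
        # toy 3-parameter dynamics standing in for a signalling-pathway model
        a, b, c = theta
        return a * np.exp(-b * t) + c * t

    theta0 = np.array([2.0, 0.5, 0.1])
    t = np.linspace(0.0, 10.0, 50)

    # Relative (1% perturbation) sensitivities, one column per parameter
    S = np.column_stack([
        (model(theta0 * (1.0 + 0.01 * np.eye(3)[j]), t) - model(theta0, t)) / 0.01
        for j in range(3)
    ])

    # Compensation measured by |cosine similarity| between sensitivity columns
    U = S / np.linalg.norm(S, axis=0)
    dist = 1.0 - np.abs(U.T @ U)          # small distance = compensative pair
    np.fill_diagonal(dist, 0.0)           # remove round-off on the diagonal
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=0.2, criterion="distance")
    print("parameter cluster labels:", labels)
    ```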

  17. Modeling of an Adjustable Beam Solid State Light

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax OpticStudio ®. The...

  18. Maternal Working Models of Attachment, Marital Adjustment, and the Parent-Child Relationship.

    Science.gov (United States)

    Eiden, Rina Das; And Others

    1995-01-01

    Examined the connection between maternal working models, marital adjustment, and parent-child relationship among 45 mothers and their 16- to 62-month-old children. Found that maternal working models were related to the quality of mother-child interactions and child security, as well as a significant relationship between marital adjustment and…

  19. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available

  20. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  1. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  2. The impact of highway base-saturation flow rate adjustment on Kuwait's transport and environmental parameters estimation.

    Science.gov (United States)

    AlRukaibi, Fahad; AlKheder, Sharaf; Al-Rukaibi, Duaij; Al-Burait, Abdul-Aziz

    2018-03-23

    Traditional transportation systems' management and operation have mainly focused on improving traffic mobility and safety without imposing any environmental concerns. Transportation and environmental issues are interrelated and affected by the same parameters, especially at signalized intersections. Additionally, traffic congestion at signalized intersections is a major contributor to environmental problems related to vehicle emissions, fuel consumption, and delay. Therefore, signalized intersection design and operation is an important factor in minimizing the impact on the environment. The design and operation of signalized intersections are highly dependent on the base saturation flow rate (BSFR). The Highway Capacity Manual (HCM) uses a base saturation flow rate of 1900 passenger cars/h/lane for areas with a population of 250,000 or more and a value of 1750 passenger cars/h/lane for less populated areas. The base saturation flow rate value in the HCM is derived from field data collected in developed countries. The value adopted in Kuwait is 1800 passenger cars/h/lane, which is the value used in this analysis as the basis for comparison. Due to the difference in behavior between drivers in developed countries and their counterparts in Kuwait, an adjustment was made to the base saturation flow rate to represent Kuwait's traffic and environmental conditions. The reduction in fuel consumption and vehicle emissions after modifying the base saturation flow rate (BSFR increased by 12.45%) was about 34% on average. Direct field measurements of the saturation flow rate were used, while the air quality mobile lab was used to calculate emission rates. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
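
    As a concrete illustration of recovering the two photosynthesis parameters (initial slope and assimilation number) from production data, the sketch below fits one common photosynthesis-irradiance function, the Jassby-Platt tanh form, by nonlinear least squares. The specific function, irradiance values and noise level are assumptions for illustration; the paper evaluates a whole suite of such functions against in situ profiles.

    ```python
    # Fit a P-I curve to synthetic production data with scipy (illustration only).
    import numpy as np
    from scipy.optimize import curve_fit

    def pi_curve(I, alpha, p_max):
        # biomass-normalized production as a function of irradiance I
        return p_max * np.tanh(alpha * I / p_max)

    I = np.linspace(5, 1500, 30)          # irradiance, umol photons m-2 s-1 (assumed)
    true_alpha, true_pmax = 0.012, 6.0    # assumed "true" parameter values
    P = pi_curve(I, true_alpha, true_pmax) + np.random.default_rng(2).normal(0, 0.2, I.size)

    (alpha_hat, pmax_hat), cov = curve_fit(pi_curve, I, P, p0=[0.01, 5.0])
    print(f"initial slope alpha ~ {alpha_hat:.4f}, assimilation number P_max ~ {pmax_hat:.2f}")
    ```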

  4. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.

  5. Reduction of Absorbed Dose in Storage Phosphor Urography by Significant Lowering of Tube Voltage and Adjustment of Image Display Parameters

    International Nuclear Information System (INIS)

    Wiltz, H.J.; Petersen, U.; Axelsson, B.

    2005-01-01

    Purpose: To investigate whether image quality in storage phosphor urography can be maintained when the X-ray tube voltage is significantly lowered to give a lower patient dose. Material and Methods: Initial phantom studies were used to establish exposure settings at 53 kV that gave signal-to-noise ratios for contrast media structures equivalent to those obtained at the reference kilovoltage of 69 kV. Dose area product and image quality, assessed by image quality criteria and visual grading, were then recorded for 44 patients drawn at random to be examined by either the standard or modified technique. Results: Absorbed dose could be reduced by more than 30% without any significant change in image quality in manually controlled exposures and by 3% in exposures controlled by AEC. Conclusion: It might be possible to lower the tube voltage in digital examinations involving contrast media as a means of lowering patient dose. The image display parameters need to be adjusted to maintain image quality

  6. A 16.3 pJ/pulse low-complexity and energy-efficient transmitter with adjustable pulse parameters

    International Nuclear Information System (INIS)

    Jiang Jun; Zhao Yi; Shao Ke; Chen Hu; Xia Lingli; Hong Zhiliang

    2011-01-01

    This paper presents a novel, fully integrated transmitter for 3-5 GHz pulsed UWB. The BPSK modulation transmitter has been implemented in SMIC 0.13 μm CMOS technology with a 1.2-V supply voltage and a die size of 0.8 × 0.95 mm2. This transmitter is based on the impulse response filter method, which uses a tunable R paralleled with an LC frequency-selection network to realize continuously adjustable pulse parameters, including bandwidth, width and amplitude. Due to the extremely low duty cycle of pulsed UWB, a proposed output buffer is employed to save power consumption significantly. Finally, measurement results show that the transmitter consumes only 16.3 pJ/pulse at a pulse repetition rate of 100 Mb/s. Generated pulses strictly comply with the FCC spectral mask. The pulse width is continuously variable from 900 ps to 1.5 ns, and amplitudes from a minimum of 178 mVpp to a maximum of 432 mVpp can be achieved. (semiconductor integrated circuits)

  7. Comprehensive Study of Z-Cut Highly Integrated LiNbO3 Optical Modulator with Adjustable Chirp Parameters

    Science.gov (United States)

    Palodiya, Vikram; Raghuwanshi, Sanjeev Kumar

    2017-12-01

    In this paper, domain inversion is used in a simple fashion to improve the performance of a Z-cut highly integrated LiNbO3 optical modulator (LNOM). The Z-cut modulator has a switching voltage of ≤ 3 V and a bandwidth of 15 GHz; for an external modulator in which a traveling-wave electrode of length L_{m} imposes the modulating voltage, the product of V_π and L_{m} is fixed for a given electro-optic material (EOM). An investigation of achieving a low V_π through the magnitude of the electro-optic coefficient (EOC) for a wide variety of EOMs has been reported. The Sellmeier equation (SE) for the extraordinary index of congruent LiNbO3 is derived. The predictions related to phase matching are accurate between room temperature and 250 °C and for wavelengths ranging from 0.4 to 5 μm. The SE predicts more accurate refractive indices (RI) at long wavelengths. The different overlaps between the waveguides for the Z-cut structure are shown to yield a chirp parameter that can be adjusted from 0 to 0.7. Theoretical results are verified by simulation.

  8. Parameter Estimation of a Plucked String Synthesis Model Using a Genetic Algorithm with Perceptual Fitness Calculation

    Directory of Open Access Journals (Sweden)

    Riionheimo Janne

    2003-01-01

    Full Text Available We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been intensively used for sound synthesis of various string instruments but the fine tuning of the parameters has been carried out with a semiautomatic method that requires some hand adjustment with human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
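
    The overall estimation loop can be sketched with a toy example. The code below is not the paper's system: it uses a bare-bones Karplus-Strong-style string with two free parameters (loop gain g and lowpass coefficient a), a log-spectral distance standing in for the perceptual fitness, and a minimal GA with elitist selection and Gaussian mutation (no crossover). All names, ranges and settings are illustrative assumptions.

    ```python
    # Toy GA estimating two plucked-string synthesis parameters (illustration only).
    import numpy as np

    FS, N = 44100, 16384   # sample rate and tone length (short for speed)

    def pluck(g, a, f0=220.0, n=N):
        delay = int(FS / f0)
        buf = np.random.default_rng(0).uniform(-1, 1, delay)   # fixed excitation
        out, prev = np.empty(n), 0.0
        for i in range(n):
            y = g * ((1 - a) * buf[i % delay] + a * prev)      # lowpass in the loop
            prev = y
            buf[i % delay] = y
            out[i] = y
        return out

    def fitness(params, target_spec):
        g, a = params
        spec = np.log1p(np.abs(np.fft.rfft(pluck(g, a))))
        return -np.mean((spec - target_spec) ** 2)             # higher is better

    target_spec = np.log1p(np.abs(np.fft.rfft(pluck(0.995, 0.4))))   # "recorded" tone

    rng = np.random.default_rng(3)
    pop = rng.uniform([0.9, 0.0], [1.0, 0.9], size=(30, 2))    # population of (g, a)
    for gen in range(20):
        scores = np.array([fitness(p, target_spec) for p in pop])
        parents = pop[np.argsort(scores)[-10:]]                # keep the 10 best
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.01, (20, 2))
        pop = np.vstack([parents, np.clip(children, [0.9, 0.0], [1.0, 0.9])])

    best = pop[np.argmax([fitness(p, target_spec) for p in pop])]
    print("estimated (g, a):", np.round(best, 3))
    ```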

  9. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  10. Circumplex and Spherical Models for Child School Adjustment and Competence.

    Science.gov (United States)

    Schaefer, Earl S.; Edgerton, Marianna

    The goal of this study is to broaden the scope of a conceptual model for child behavior by analyzing constructs relevant to cognition, conation, and affect. Two samples were drawn from school populations. For the first sample, 28 teachers from 8 rural, suburban, and urban schools rated 193 kindergarten children. Each teacher rated up to eight…

  11. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.

  12. Study on Parameters Modeling of Wind Turbines Using SCADA Data

    Directory of Open Access Journals (Sweden)

    Yonglong YAN

    2014-08-01

    Full Text Available Taking advantage of the massive monitoring data currently available from the Supervisory Control and Data Acquisition (SCADA) systems of wind farms, building data models of the state parameters of wind turbines (WTs) is of great significance for anomaly detection, early warning and fault diagnosis. The operational conditions and the relationships between the state parameters of wind turbines are complex, and it is difficult to establish state-parameter models accurately; therefore, a modeling method for the state parameters of wind turbines that takes parameter selection into account is proposed. Firstly, by analyzing the characteristics of SCADA data, a reasonable data range and monitoring parameters are chosen. Secondly, a neural network algorithm is adopted, and the method for selecting the input parameters of the model is presented. Generator bearing temperature and cooling air temperature are taken as target parameters, the two models are built, and the input parameters of each model are selected. Finally, the parameter selection method in this paper and the method using genetic algorithm-partial least squares (GA-PLS) are compared, and the results show that the proposed methods are correct and effective. Furthermore, the modeling of the two parameters illustrates that the method in this paper can be applied to other state parameters of wind turbines.
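
    A minimal, hypothetical sketch of this kind of data-driven state-parameter model is shown below: generator bearing temperature regressed on a few SCADA signals with a small neural network. The column names and data are fabricated, and the input-selection step is reduced to a fixed list for illustration; it is not the paper's dataset or its exact selection procedure.

    ```python
    # Neural-network regression of a turbine state parameter on SCADA inputs.
    import numpy as np
    import pandas as pd
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(4)
    n = 5000
    scada = pd.DataFrame({
        "wind_speed": rng.uniform(3, 20, n),
        "active_power": rng.uniform(0, 2000, n),
        "ambient_temp": rng.uniform(-5, 30, n),
        "rotor_speed": rng.uniform(8, 16, n),
    })
    # fabricated target: bearing temperature driven by power, ambient temp, rotor speed
    scada["gen_bearing_temp"] = (0.02 * scada["active_power"] + 0.8 * scada["ambient_temp"]
                                 + 1.5 * scada["rotor_speed"] + rng.normal(0, 1.5, n))

    inputs = ["wind_speed", "active_power", "ambient_temp", "rotor_speed"]  # assumed selection
    X_tr, X_te, y_tr, y_te = train_test_split(scada[inputs], scada["gen_bearing_temp"],
                                              test_size=0.3, random_state=0)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
    model.fit(X_tr, y_tr)
    print("MAE on held-out data:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
    ```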

  13. The influence of pH adjustment on kinetics parameters in tapioca wastewater treatment using aerobic sequencing batch reactor system

    Science.gov (United States)

    Mulyani, Happy; Budianto, Gregorius Prima Indra; Margono, Kaavessina, Mujtahid

    2018-02-01

    The present investigation deals with an aerobic sequencing batch reactor system for tapioca wastewater treatment under varying influent pH conditions. The project was carried out to evaluate the effect of pH on the kinetics parameters of the system. This was done by operating the aerobic sequencing batch reactor system for 8 hours under several tapioca wastewater conditions (pH 4.91, pH 7, pH 8). The Chemical Oxygen Demand (COD) and Mixed Liquor Volatile Suspended Solids (MLVSS) of the system effluent at steady state were determined at two-hour intervals to generate data for the substrate inhibition kinetics parameters. Values of the kinetic constants were determined using the Monod and Andrews models. No inhibition constant (Ki) was detected in any process variation of the aerobic sequencing batch reactor system for tapioca wastewater treatment in this study. Furthermore, pH 8 was selected as the preferred operating condition within the investigated pH range due to its kinetics parameter values of µmax = 0.010457/h and Ks = 255.0664 mg/L COD.
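
    The two kinetic forms named above are straightforward to fit by nonlinear least squares. The sketch below uses fabricated substrate concentrations and specific growth rates (not the study's measurements) purely to show how Monod and Andrews (substrate-inhibition) parameters would be estimated from such data.

    ```python
    # Fit Monod and Andrews kinetic models to synthetic data (illustration only).
    import numpy as np
    from scipy.optimize import curve_fit

    def monod(S, mu_max, Ks):
        return mu_max * S / (Ks + S)

    def andrews(S, mu_max, Ks, Ki):
        # Andrews (Haldane) form with a substrate-inhibition term S^2/Ki
        return mu_max * S / (Ks + S + S**2 / Ki)

    S = np.array([50.0, 100, 200, 300, 400, 600, 800, 1000])                      # mg/L COD
    mu = np.array([0.002, 0.004, 0.006, 0.0075, 0.0085, 0.0095, 0.010, 0.0102])   # 1/h

    (mu_max_m, Ks_m), _ = curve_fit(monod, S, mu, p0=[0.01, 250])
    (mu_max_a, Ks_a, Ki_a), _ = curve_fit(andrews, S, mu, p0=[0.01, 250, 5000],
                                          bounds=([0, 0, 1], [1, 1e4, 1e7]))
    print(f"Monod:   mu_max = {mu_max_m:.4f}/h, Ks = {Ks_m:.1f} mg/L")
    print(f"Andrews: mu_max = {mu_max_a:.4f}/h, Ks = {Ks_a:.1f} mg/L, Ki = {Ki_a:.0f} mg/L")
    ```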

  14. Study on Gas Field Optimization Distribution with Parameters Adjustment of the Air Duct Outlet for Mechanized Heading Face in Coal Mine

    Science.gov (United States)

    Gong, Xiao-Yan; Zhang, Xin-Yi; Wu, Yue; Xia, Zhi-Xin; Li, Ying

    2017-12-01

    At present, as drilling dimensions in coal mines increase with cross-section expansion and longer drivage distances, gas accumulation at the mechanized heading face becomes severe. In this paper, optimization research on the gas distribution was carried out by adjusting the parameters of the air duct outlet, including the angle, the caliber, and the front-rear distance of the outlet. The mechanized heading face of the Ningtiaota coal mine was taken as the research object; the problems of the original gas field were simulated and analyzed, the reasonable parameter ranges of the air duct outlet were determined according to the allowable range of wind speed and the gas dilution effect, and the adjustment range of each parameter of the air duct outlet was preliminarily determined. Based on this, the gas field distribution under different parameter adjustments of the air duct outlet was simulated. The specific parameters for different distances between the air duct outlet and the mechanized heading face were obtained, and a new method of optimizing the gas distribution by adjusting the parameters of the air duct outlet is provided.

  15. Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory

    DEFF Research Database (Denmark)

    Li, Yan; Shuai, Zhikang; Xu, Qinming

    2016-01-01

    Droop control framework with an adjustable virtual impedance loop is proposed in this paper, which is based on the cloud model theory. The proposed virtual impedance loop includes two terms: a negative virtual resistor and an adjustable virtual inductance. The negative virtual resistor term...

  16. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... was applied.Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with a deterministic capture zones derived from a reference model. The results showed...

  17. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
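
    The logic described in the abstract can be read schematically as follows; the sketch below is one illustrative interpretation, not the patented implementation, and the map structure, gain, threshold and parameter names (spark advance, EGR fraction) are assumptions.

    ```python
    # Sensitivity-gated adjustment of two engine map look-up tables (illustrative).
    def update_maps(map1, map2, cell, perf, target, sens1, sens2, thresh=0.1, gain=0.05):
        """map1/map2: dict mapping an operating-condition cell to a parameter value;
        perf/target: measured and target performance variable;
        sens1/sens2: estimated d(perf)/d(parameter) at the current cell."""
        error = target - perf
        p1, p2 = map1[cell], map2[cell]
        # nudge the live parameter values so the performance variable approaches the target
        p1 += gain * error * sens1
        p2 += gain * error * sens2
        # persist the correction only into the map whose parameter the output responds to
        if abs(sens1) > thresh:
            map1[cell] = p1
        if abs(sens2) > thresh:
            map2[cell] = p2
        return p1, p2

    # toy usage with one operating-condition cell and two hypothetical maps
    spark_map = {("1500rpm", "low_load"): 12.0}
    egr_map = {("1500rpm", "low_load"): 0.08}
    update_maps(spark_map, egr_map, ("1500rpm", "low_load"),
                perf=34.0, target=35.0, sens1=0.6, sens2=0.05)
    print(spark_map, egr_map)
    ```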

  18. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)

    Preferred Customer

    SUBGRADE MODELING. Asrat Worku. Department of ... The models give consistently larger stiffness for the Winkler springs as compared to previously proposed similar continuum-based models that ignore the lateral stresses. ...... (ν = 0.25 and E = 40MPa); (b) a medium stiff clay (ν = 0.45 and E = 50MPa). In contrast to this, ...

  19. Detectability adjusted count models of songbird abundance: Chapter 6

    Science.gov (United States)

    2011-01-01

    Sagebrush (Artemisia spp.) steppe ecosystems have experienced recent changes resulting not only in the loss of habitat but also fragmentation and degradation of remaining habitats. As a result, sagebrush-obligate and sagebrush associated songbird populations have experienced population declines over the past several decades. We examined landscape-scale responses in occupancy and abundance for six focal songbird species at 318 survey sites across the Wyoming Basins Ecoregional Assessment (WBEA) area. Occupancy and abundance models were fit for each species using datasets developed at multiple moving window extents to assess landscape-scale relationships between abiotic, habitat, and anthropogenic factors. Anthropogenic factors had less influence on species occupancy or abundance than abiotic and habitat factors. Sagebrush measures were strong predictors of occurrence for sagebrush-obligate species, such as Brewer’s sparrows (Spizella breweri), sage sparrows (Amphispiza belli) and sage thrashers (Oreoscoptes montanus), as well as green-tailed towhees (Pipilo chlorurus), a species associated with mountain shrub communities. Occurrence for lark sparrows (Chondestes grammacus) and vesper sparrows (Pooecetes gramineus), considered shrub steppe-associated species, was also related to big sagebrush communities, but at large spatial extents. Although relationships between anthropogenic variables and occurrence were weak for most species, the consistent relationship with sagebrush habitat variables suggests direct habitat loss and not edge or additional fragmentation effects are causing declines in the avifauna examined in the WBEA area. Thus, natural and anthropogenic disturbances that result in loss of critical habitats are the biggest threats to these species. We applied our models spatially across the WBEA area to identify and prioritize key areas for conservation.

  20. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    Science.gov (United States)

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  1. Adjustment problems and maladaptive relational style: a mediational model of sexual coercion in intimate relationships.

    Science.gov (United States)

    Salwen, Jessica K; O'Leary, K Daniel

    2013-07-01

    Four hundred and fifty-three married or cohabiting couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.

  2. Modelling decremental ramps using 2- and 3-parameter "critical power" models.

    Science.gov (United States)

    Morton, R Hugh; Billat, Veronique

    2013-01-01

    The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.

  3. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
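
    The two regression-tree directions described above can be mimicked with a small, fabricated example: one tree predicts a performance criterion from all parameters, the other predicts a parameter from all criteria, and the feature importances hint at how strongly each parameter and criterion are tied together. Variable names, the synthetic relationships and the tree settings below are illustrative assumptions, not the study's setup.

    ```python
    # Two regression-tree directions on fabricated parameter/criterion samples.
    import numpy as np
    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    n = 2000
    params = pd.DataFrame(rng.uniform(0, 1, (n, 3)),
                          columns=["soil_storage", "recession_k", "percolation"])
    # fabricated "performance criteria", each driven mainly by one parameter
    crit = pd.DataFrame({
        "NSE":         1 - (params["recession_k"] - 0.4) ** 2 + rng.normal(0, 0.01, n),
        "KGE_beta":    1 - (params["soil_storage"] - 0.6) ** 2 + rng.normal(0, 0.01, n),
        "RSR_lowflow": (params["percolation"] - 0.5) ** 2 + rng.normal(0, 0.01, n),
    })

    tree_crit = DecisionTreeRegressor(max_depth=4).fit(params, crit["NSE"])        # criterion <- parameters
    tree_par = DecisionTreeRegressor(max_depth=4).fit(crit, params["recession_k"]) # parameter <- criteria

    print("parameters most relevant for NSE:",
          dict(zip(params.columns, tree_crit.feature_importances_.round(2))))
    print("criteria most affected by recession_k:",
          dict(zip(crit.columns, tree_par.feature_importances_.round(2))))
    ```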

  4. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    Science.gov (United States)

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  5. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  6. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...

  7. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.

  8. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    Science.gov (United States)

    Paulson, James A.

    This paper reviews the application of the EM Algorithm to marginal maximum likelihood estimation of parameters in the latent class model and extends the algorithm to the case where there are monotone homogeneity constraints on the item parameters. It is shown that the EM algorithm can be used to obtain marginal maximum likelihood estimates of the…
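    A minimal sketch of the EM algorithm for a latent class model with binary items is shown below; it gives unconstrained marginal maximum likelihood estimates and does not impose the monotone homogeneity constraints discussed in the paper. All names and the synthetic data are illustrative.

```python
import numpy as np

def em_latent_class(X, n_classes=2, n_iter=200, seed=0):
    """Basic (unconstrained) EM for a latent class model with binary items.
    X: (n_subjects, n_items) 0/1 matrix. Returns class proportions and item probabilities."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)        # class proportions
    p = rng.uniform(0.3, 0.7, size=(n_classes, m))  # P(item j = 1 | class k)
    for _ in range(n_iter):
        # E-step: posterior class membership for each subject
        log_lik = (X @ np.log(p).T) + ((1 - X) @ np.log(1 - p).T) + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and conditional item probabilities
        pi = post.mean(axis=0)
        p = (post.T @ X) / post.sum(axis=0)[:, None]
        p = np.clip(p, 1e-6, 1 - 1e-6)
    return pi, p

# Example with synthetic responses generated from two latent classes
rng = np.random.default_rng(1)
true_p = np.array([[0.8, 0.7, 0.9, 0.6], [0.2, 0.3, 0.1, 0.4]])
z = rng.integers(0, 2, size=500)
X = (rng.random((500, 4)) < true_p[z]).astype(float)
print(em_latent_class(X))
```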

  9. Modeling phosphorus in the Lake Allatoona watershed using SWAT: I. Developing phosphorus parameter values.

    Science.gov (United States)

    Radcliffe, D E; Lin, Z; Risse, L M; Romeis, J J; Jackson, C R

    2009-01-01

    Lake Allatoona is a large reservoir north of Atlanta, GA, that drains an area of about 2870 km2 scheduled for a phosphorus (P) total maximum daily load (TMDL). The Soil and Water Assessment Tool (SWAT) model has been widely used for watershed-scale modeling of P, but there is little guidance on how to estimate P-related parameters, especially those related to in-stream P processes. In this paper, methods are demonstrated to individually estimate SWAT soil-related P parameters and to collectively estimate P parameters related to stream processes. Stream related parameters were obtained using the nutrient uptake length concept. In a manner similar to experiments conducted by stream ecologists, a small point source is simulated in a headwater sub-basin of the SWAT models, then the in-stream parameter values are adjusted collectively to get an uptake length of P similar to the values measured in the streams in the region. After adjusting the in-stream parameters, the P uptake length estimated in the simulations ranged from 53 to 149 km compared to uptake lengths measured by ecologists in the region of 11 to 85 km. Once the a priori P-related parameter set was developed, the SWAT models of main tributaries to Lake Allatoona were calibrated for daily transport. Models using SWAT P parameters derived from the methods in this paper outperformed models using default parameter values when predicting total P (TP) concentrations in streams during storm events and TP annual loads to Lake Allatoona.

  10. Utility of a single adjusting compartment: a novel methodology for whole body physiologically-based pharmacokinetic modelling

    Science.gov (United States)

    Ando, Hirotaka; Izawa, Shigeru; Hori, Wataru; Nakagawa, Ippei

    2008-01-01

    Background: There are various methods for predicting human pharmacokinetics. Among these, a whole body physiologically-based pharmacokinetic (WBPBPK) model is useful because it gives a mechanistic description. However, WBPBPK models cannot predict human pharmacokinetics with enough precision. This study was conducted to elucidate the primary reason for poor predictions by WBPBPK models, and to enable better predictions to be made without reliance on complex concepts. Methods: The primary reasons for poor predictions of human pharmacokinetics were investigated using a generic WBPBPK model that incorporated a single adjusting compartment (SAC), a virtual organ compartment with physiological parameters that can be adjusted arbitrarily. The blood flow rate, organ volume, and the steady state tissue-plasma partition coefficient of a SAC were calculated to fit simulated to observed pharmacokinetics in the rat. The adjusted SAC parameters were fixed and scaled up to the human using a newly developed equation. Using the scaled-up SAC parameters, human pharmacokinetics were simulated and each pharmacokinetic parameter was calculated. These simulated parameters were compared to the observed data. Simulations were performed to confirm the relationship between the precision of prediction and the number of tissue compartments, including a SAC. Results: Increasing the number of tissue compartments led to an improvement of the average-fold error (AFE) of total body clearances (CLtot) and half-lives (T1/2) calculated from the simulated human blood concentrations of 14 drugs. The presence of a SAC also improved the AFE values of a ten-organ model from 6.74 to 1.56 in CLtot, and from 4.74 to 1.48 in T1/2. Moreover, the within-2-fold errors were improved in all models; incorporating a SAC gave results from 0 to 79% in CLtot, and from 14 to 93% in T1/2 of the ten-organ model. Conclusion: By using a SAC in this study, we were able to show that poor prediction resulted mainly from such

  11. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Full Text Available Background: An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results: In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions: The proposed incremental estimation method is able to tackle the issue of the lack of complete parameter identifiability and to significantly reduce the computational effort in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.

  12. On the compensation between cloud feedback and cloud adjustment in climate models

    Science.gov (United States)

    Chung, Eui-Seok; Soden, Brian J.

    2018-02-01

    Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.

  13. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
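    The simulation-plus-estimation loop described above can be sketched as follows: generate concentration data from a simple instantaneous-release dispersion solution, add Gaussian noise to mimic the remote-sensed measurements, and recover the parameters with a batch least-squares fit. The plume solution below omits the shear term of the paper's model, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def concentration(params, x, y, t, mass=1.0):
    """2D instantaneous-release advection-diffusion solution (simplified, no shear)."""
    u, kx, ky = params
    return (mass / (4 * np.pi * t * np.sqrt(kx * ky))
            * np.exp(-((x - u * t) ** 2) / (4 * kx * t) - y ** 2 / (4 * ky * t)))

# Simulate "remote-sensed" data: model output plus Gaussian noise
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-50, 150, 25), np.linspace(-50, 50, 25))
true = np.array([0.5, 10.0, 5.0])               # u (m/s), Kx, Ky (m^2/s)
obs = concentration(true, x, y, t=100.0) + 1e-6 * rng.normal(size=x.shape)

# Batch least-squares estimation of the parameters from the noisy field
res = least_squares(
    lambda p: (concentration(p, x, y, t=100.0) - obs).ravel(),
    x0=[0.1, 1.0, 1.0], bounds=([0, 1e-3, 1e-3], [5, 100, 100]))
print(res.x)   # estimates of u, Kx, Ky
```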

  14. Lumped parameter models for the interpretation of environmental tracer data

    International Nuclear Information System (INIS)

    Maloszewski, P.; Zuber, A.

    1996-01-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs
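    For steady-state flow these lumped-parameter models reduce to convolving the tracer input with a transit-time distribution; a minimal sketch for the exponential model (EM), including radioactive decay of the tracer, is given below. The piston flow and dispersion variants are omitted, and the function names and example numbers are illustrative only.

```python
import numpy as np

def exponential_model_response(c_in, dt, mean_transit_time, decay_const=0.0):
    """Convolve a tracer input series with the exponential (EM) transit-time
    distribution g(tau) = exp(-tau/T)/T, optionally with radioactive decay."""
    tau = np.arange(0, 10 * mean_transit_time, dt)
    g = np.exp(-tau / mean_transit_time) / mean_transit_time
    g *= np.exp(-decay_const * tau)              # decay during transit (e.g. tritium)
    return np.convolve(c_in, g)[: len(c_in)] * dt

# Example: a tritium input pulse routed through a system with T = 15 years
dt = 1.0                                         # years
c_in = np.zeros(60); c_in[5] = 100.0             # unit pulse in year 5
lam = np.log(2) / 12.32                          # tritium decay constant (1/yr)
print(exponential_model_response(c_in, dt, mean_transit_time=15.0, decay_const=lam)[:10])
```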

  15. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  16. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  17. Exploring the interdependencies between parameters in a material model.

    Energy Technology Data Exchange (ETDEWEB)

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  18. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d'Automatique de Setif, Departement d'Electrotechnique, Faculte des Sciences de l'Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast-front impulse currents. The difficulty with these models lies essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)

  19. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    Full Text Available With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with Hamilton's principle. Secondly, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.

  20. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed and therefore BMP representation. This is, however, only possible for gauged watersheds. There are many watersheds for which there are very little or no monitoring data available, raising the question of whether it is possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
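    A minimal sketch of the regression-based regionalization idea is shown below: a calibrated parameter from gauged watersheds is regressed on catchment attributes, and the fitted relation (or, alternatively, the global average) is applied to an ungauged watershed, with Nash-Sutcliffe efficiency used to judge the resulting simulations. The attribute table, parameter values and names are invented for illustration and are not from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency used to judge simulations from regionalized parameters."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical table: catchment attributes (area, forest fraction) and a calibrated parameter
attributes = np.array([[120.0, 0.35], [300.0, 0.10], [80.0, 0.50], [210.0, 0.22]])
calibrated = np.array([78.0, 85.0, 72.0, 81.0])

# Regression-based regionalization: parameter = f(catchment attributes)
reg = LinearRegression().fit(attributes, calibrated)

# Global averaging alternative
global_avg = calibrated.mean()

ungauged = np.array([[150.0, 0.30]])
print("regression estimate:", reg.predict(ungauged)[0], "| global average:", global_avg)
print("example NSE:", nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```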

  1. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Full Text Available Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  2. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
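    The flavour of the simpler likelihood formulation (independent Gaussian simulation errors, no AR(1) term) combined with MCMC sampling can be sketched as below; the toy linear-reservoir model, the Metropolis sampler settings and all names are illustrative stand-ins rather than the Ecomag setup of the study.

```python
import numpy as np

def toy_model(params, forcing):
    """Stand-in for the hydrological model: a linear-reservoir response to rainfall."""
    k, = params
    q = np.zeros_like(forcing)
    store = 0.0
    for i, p in enumerate(forcing):
        store += p
        q[i] = k * store
        store -= q[i]
    return q

def log_likelihood(params, forcing, q_obs, sigma):
    """Simple likelihood (no AR(1) term): independent Gaussian simulation errors."""
    q_sim = toy_model(params, forcing)
    return -0.5 * np.sum(((q_obs - q_sim) / sigma) ** 2) - len(q_obs) * np.log(sigma)

def metropolis(forcing, q_obs, n_iter=5000, sigma=0.5, seed=0):
    """Random-walk Metropolis sampler for the posterior of the model parameter k."""
    rng = np.random.default_rng(seed)
    k, chain = 0.3, []
    ll = log_likelihood([k], forcing, q_obs, sigma)
    for _ in range(n_iter):
        k_new = k + 0.05 * rng.normal()
        if 0.0 < k_new < 1.0:                       # uniform prior on (0, 1)
            ll_new = log_likelihood([k_new], forcing, q_obs, sigma)
            if np.log(rng.random()) < ll_new - ll:
                k, ll = k_new, ll_new
        chain.append(k)
    return np.array(chain)

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=200)
q_obs = toy_model([0.4], rain) + 0.5 * rng.normal(size=200)
print(metropolis(rain, q_obs).mean())               # posterior mean of k
```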

  3. Brownian motion model with stochastic parameters for asset prices

    Science.gov (United States)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x (t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameter with that of the model with stochastic parameter.
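    A stripped-down sketch of letting the drift and volatility evolve between steps is given below; the particular random-walk update used here is only a placeholder for the conditional distribution described in the abstract, and all numbers are illustrative.

```python
import numpy as np

def simulate_stochastic_parameter_gbm(s0=1.0, mu0=0.05, sigma0=0.2,
                                      n_steps=250, dt=1 / 250, seed=0):
    """Geometric Brownian motion in which drift and volatility themselves change
    randomly between steps (a simplified stand-in for the conditional-distribution
    parameter update described in the abstract)."""
    rng = np.random.default_rng(seed)
    s, mu, sigma = s0, mu0, sigma0
    path = [s]
    for _ in range(n_steps):
        s *= np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.normal())
        # parameter update: next (mu, sigma) drawn around the current values
        mu = mu + 0.01 * rng.normal()
        sigma = max(0.05, sigma + 0.01 * rng.normal())
        path.append(s)
    return np.array(path)

print(simulate_stochastic_parameter_gbm()[-5:])
```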

  4. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothetic model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subject resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and support received from the family on psychological wellbeing; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  6. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation ...

  7. Determination of the Corona model parameters with artificial neural networks

    International Nuclear Information System (INIS)

    Ahmet, Nayir; Bekir, Karlik; Arif, Hashimov

    2005-01-01

    Full text: The aim of this study is to calculate new model parameters taking into account the corona on the wires of electrical transmission lines. For this purpose, a neural network model is proposed for modelling the frequency characteristics of the corona. This model was then compared with another model developed at the Polytechnic Institute of Saint Petersburg. The results of developing the specified corona model, for calculating its influence on wave processes in multi-wire lines and for determining its parameters, are presented. Calculation equations are given for an electrical transmission line, with allowance for the skin effect in the ground and wires, with reference to the developed corona model.

  8. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s

  9. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...

  10. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter...

  11. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is the representation of the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds; when the categories of the dependent variable are ordered, the appropriate model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  12. Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25

    International Nuclear Information System (INIS)

    Larreteguy, A.E; Mazufri, C.M

    2000-01-01

    The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The efforts in modeling this mechanism are aimed to: get a deep understanding of the physical phenomena involved; identify the set of parameters relevant to the dynamics of the system; allow the numerical simulation of the system; predict the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and help in defining the design of the CAPEM (loop for testing the mechanism under high pressure/high temperature conditions). Thanks to the close interaction between the mechanics, the experimenters, and the modelists that compose the MSAC task force, it has been possible to suggest improvements, not only in the design of the mechanism, but also in the design and the operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough so as to be tested in a high-pressure loop

  13. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software. From that simulation, variables were recorded to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers greater flexibility than the model programmed in PSIM.

  14. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies with various models, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracy of flood routing across models, and the optimal estimated outflows of the NVPNLMM were closer to the observed outflows than those of the other models.
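    A minimal sketch of the fixed-parameter nonlinear Muskingum building block, with storage S = K*[x*I + (1-x)*O]**m, is given below; a global optimizer stands in for the Excel Solver used in the paper, and the variable-parameter extension would additionally let K, x and m change during the event. All names, bounds and the synthetic hydrograph are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def route_nonlinear_muskingum(params, inflow, dt=1.0, o0=None):
    """Route an inflow hydrograph through the nonlinear Muskingum storage relation
    S = K*[x*I + (1-x)*O]**m with fixed parameters."""
    K, x, m = params
    o0 = inflow[0] if o0 is None else o0
    S = K * (x * inflow[0] + (1 - x) * o0) ** m          # initial channel storage
    outflow = np.zeros_like(inflow, dtype=float)
    for j in range(len(inflow)):
        O = ((S / K) ** (1.0 / m) - x * inflow[j]) / (1 - x)
        outflow[j] = max(O, 0.0)
        S = max(S + dt * (inflow[j] - outflow[j]), 0.0)  # continuity: dS/dt = I - O
    return outflow

def fit_parameters(inflow, observed_outflow):
    """Estimate (K, x, m) by minimizing the sum of squared routing errors
    (a global optimizer standing in for the Excel Solver of the paper)."""
    sse = lambda p: np.sum((route_nonlinear_muskingum(p, inflow) - observed_outflow) ** 2)
    bounds = [(0.01, 5.0), (0.01, 0.49), (1.0, 3.0)]
    return differential_evolution(sse, bounds, seed=0, maxiter=200).x

# Usage sketch: recover known parameters from a synthetic flood event
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32, 28, 24], float)
observed = route_nonlinear_muskingum([0.8, 0.3, 1.5], inflow)
print(fit_parameters(inflow, observed).round(3))
```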

  15. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil with focus on horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines. ... lumped-parameter models with respect to the prediction of the maximum response during excitation and the geometrical damping related to free vibrations of a footing.

  16. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often adopts a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  17. The Psychosocial Adjustment of the Southeast Asian Refugee: An Overview of Empirical Findings and Theoretical Models.

    Science.gov (United States)

    Nicassio, Perry M.

    1985-01-01

    Summarizes clinical and research literature on Southeast Asian refugees' adjustment in the United States and proposes the adoption of theoretical models that may help explain individual differences. Reports that acculturation, learned helplessness, and stress management models appear to aid the conceptualizing of refugee problems and provide a…

  18. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    Science.gov (United States)

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  19. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density function and are included in the dose optimization process. We found that the final solution strongly depends on distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has potential for us to maximally utilize the available radiobiology knowledge for better IMRT treatment
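    The idea of folding parameter uncertainty into a dose-response term can be sketched with the generalized equivalent uniform dose, EUD = (sum_i v_i * d_i^a)^(1/a), by averaging over samples of a drawn from an assumed probability density instead of using a single fixed value. The distribution, numbers and names below are illustrative and this is not the authors' full statistical framework.

```python
import numpy as np

def eud(dose, volume_fraction, a):
    """Generalized equivalent uniform dose for a dose-volume distribution and parameter a."""
    return np.sum(volume_fraction * dose ** a) ** (1.0 / a)

def expected_eud(dose, volume_fraction, a_samples):
    """Average EUD over samples of the tissue parameter a, representing its
    probability density rather than a single fixed value."""
    return np.mean([eud(dose, volume_fraction, a) for a in a_samples])

# Example: a normal-tissue dose-volume histogram with an uncertain parameter a
rng = np.random.default_rng(0)
dose = np.array([10.0, 25.0, 40.0, 55.0])        # Gy, per volume bin
vol = np.array([0.4, 0.3, 0.2, 0.1])             # fractional volumes (sum to 1)
a_samples = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=1000)
print(eud(dose, vol, a=8.0), expected_eud(dose, vol, a_samples))
```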

  20. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  1. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Science.gov (United States)

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. (c) 2015 APA, all rights reserved.

  2. [A structural model for psychosocial adjustment in patients with early breast cancer].

    Science.gov (United States)

    Kim, Hye Young; So, Hyang Sook

    2012-02-01

    This study was done to propose a structural model to explain and predict psychosocial adjustment in patients with early breast cancer and to test the model. The model was based on the Stress-Coping Model of Lazarus and Folkman (1984). Data were collected from February 18 to March 18, 2009. For data analysis, 198 data sets were analyzed using SPSS/WIN12 and AMOS 7.0 version. Social support, uncertainty, symptom experience, and coping had statistically significant direct, indirect and total effects on psychosocial adjustment, and optimism had significant indirect and total effects on psychosocial adjustment. These variables explained 57% of total variance of the psychosocial adjustment in patients with early breast cancer. The results of the study indicate a need to enhance psychosocial adjustment of patients with early breast cancer by providing detailed structured information and various symptom alleviation programs to reduce perceived stresses such as uncertainty and symptom experience. They also suggest the need to establish support systems through participation of medical personnel and families in such programs, and to apply interventions strengthening coping methods to give the patients positive and optimistic beliefs.

  3. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  4. Automatic parameter adjustment of difference of Gaussian (DoG) filter to improve OT-MACH filter performance for target recognition applications

    Science.gov (United States)

    Alkandri, Ahmad; Gardezi, Akber; Bangalore, Nagachetan; Birch, Philip; Young, Rupert; Chatwin, Chris

    2011-11-01

    A wavelet-modified frequency domain Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter has been trained using 3D CAD models and tested on real target images acquired from a Forward Looking Infra Red (FLIR) sensor. The OT-MACH filter can be used to detect and discriminate predefined targets from a cluttered background. The FLIR sensor extends the filter's ability by increasing the range of detection by exploiting the heat signature differences between the target and the background. A Difference of Gaussians (DoG) based wavelet filter has been used to improve the OT-MACH filter's discrimination ability and distortion tolerance. Choosing the right standard deviation values of the two Gaussians comprising the filter is critical. In this paper we present a new technique for automatic adjustment of the DoG filter parameters driven by the expected target size. Tests were carried out on images acquired by the Apache AH-64 helicopter-mounted FLIR sensor, with results showing an overall improvement in the recognition of target objects present within the IR images.
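    A minimal sketch of a Difference-of-Gaussians band-pass whose two standard deviations are derived from the expected target size is given below; the size-to-sigma rule, the ratio between the two sigmas and all names are illustrative assumptions, not the authors' exact mapping.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, target_size_px, ratio=1.6):
    """Difference-of-Gaussians band-pass filter whose two standard deviations are
    set automatically from the expected target size (illustrative rule)."""
    sigma_fine = target_size_px / 6.0          # pass structure near the target scale
    sigma_coarse = ratio * sigma_fine          # suppress slowly varying background
    return gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)

# Example: filter a synthetic IR-like frame containing a bright blob of roughly 12 px
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.1, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
frame += np.exp(-(((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 6.0 ** 2)))
filtered = dog_bandpass(frame, target_size_px=12)
print(filtered.max(), np.unravel_index(filtered.argmax(), filtered.shape))
```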

  5. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  6. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  7. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural ... response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling

  8. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Background: Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods: Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results: Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions: The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
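    One simplified way to read the adjustment is sketched below: predicted disease-specific deaths at each follow-up time are thinned by the probability that the exponentially distributed diagnosis-to-death interval fits inside the follow-up window, since enrollees had to be disease-free at entry. The mean interval, rates and function names are illustrative and this is not the authors' full model.

```python
import numpy as np

def adjusted_death_rate(follow_up_years, unadjusted_rate, mean_interval_years):
    """Thin a predicted cancer death rate to reflect the requirement that participants
    were disease-free at enrollment: a death at follow-up time t can only occur if the
    diagnosis-to-death interval (modelled as exponential with the given mean) is
    shorter than t. A simplified reading of the adjustment, for illustration only."""
    t = np.asarray(follow_up_years, dtype=float)
    p_diagnosed_after_enrollment = 1.0 - np.exp(-t / mean_interval_years)
    return np.asarray(unadjusted_rate) * p_diagnosed_after_enrollment

years = np.arange(1, 11)
unadjusted = np.full(10, 120.0)          # hypothetical predicted deaths per year
print(adjusted_death_rate(years, unadjusted, mean_interval_years=1.5).round(1))
```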

  9. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    Unknown

    parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  11. Prior distributions for item parameters in IRT models

    NARCIS (Netherlands)

    Matteucci, M.; S. Mignani, Prof.; Veldkamp, Bernard P.

    2012-01-01

    The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. Firstly, regression trees are implemented in order to build informative

  12. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type of Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learned parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.

  13. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  14. Nonlinear Adjustment Model with Integral and Its Application to Super Resolution Image Reconstruction

    Directory of Open Access Journals (Sweden)

    ZHU Jianjun

    2015-07-01

    Full Text Available In super resolution image reconstruction, multiple observations of the same target are taken to obtain low resolution images, and these low resolution images are then used to reconstruct the real image of the target, namely the high resolution image. This process is similar to practice in the field of surveying and mapping, in which the same target is observed repeatedly and the optimal values are calculated with surveying adjustment methods. In this paper, the method of surveying adjustment is applied to super resolution image reconstruction. An integral nonlinear adjustment model for super resolution image reconstruction is proposed first. The model is then parameterized with a quadratic function, and finally solved with the least squares adjustment method. Based on the proposed adjustment method, a specific strategy for image reconstruction is presented. This method allows quantitative analysis of the results and successfully avoids ill-conditioning problems. The results show that, compared to traditional methods of super resolution image reconstruction, this method greatly improves the visual effect, and the PSNR and SSIM are also greatly improved, so the method is reliable and feasible.

  15. Stochastic hyperelastic modeling considering dependency of material parameters

    Science.gov (United States)

    Caylak, Ismail; Penner, Eduard; Dridger, Alex; Mahnken, Rolf

    2018-03-01

    This paper investigates the uncertainty of a hyperelastic model by treating random material parameters as stochastic variables. For its stochastic discretization a polynomial chaos expansion (PCE) is used. An important aspect of our work is the consideration of stochastic dependencies in the stochastic modeling of Ogden's material model. To this end, artificial experiments are generated using an auto-regressive moving average process based on real experiments. The parameter identification for all data provides statistics of Ogden's material parameters, which are subsequently used for stochastic modeling. Stochastic dependencies are incorporated into the PCE using a Nataf transformation from dependent distributed random variables to independent standard normal distributed ones. The representative numerical example shows that the proposed method adequately takes into account the stochastic dependencies of Ogden's material parameters.
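
    The Nataf step described above can be sketched as a Gaussian-copula mapping from the independent standard normal germ of the PCE to dependent, marginally specified material parameters. The marginals, the correlation value and the parameter set below are assumptions chosen for illustration, not the identified Ogden statistics.

      import numpy as np
      from scipy import stats

      # Illustrative marginals for two dependent material parameters (values are assumptions).
      marginals = [stats.lognorm(s=0.2, scale=1.5), stats.lognorm(s=0.3, scale=0.8)]
      # Assumed correlation of the underlying Gaussian variables; a full Nataf transformation
      # would derive this from the target correlation of the physical parameters.
      rho = np.array([[1.0, 0.6],
                      [0.6, 1.0]])
      L = np.linalg.cholesky(rho)

      def independent_to_dependent(xi):
          """Map independent standard normals xi (n x 2) to dependent, marginally
          lognormal parameters via correlation followed by inverse-CDF mapping."""
          z = xi @ L.T                       # correlated standard normals
          u = stats.norm.cdf(z)              # uniform marginals
          return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

      xi = np.random.default_rng(0).standard_normal((5, 2))
      print(independent_to_dependent(xi))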

  16. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    The paper presents a compact model for cyclic plasticity based on energy in terms of external and internal variables, and plastic yielding described by kinematic hardening and a flow potential with an additive term controlling the nonlinear cyclic hardening. The model is basically described by five parameters: external and internal stiffness, a yield stress and a limiting ultimate stress, and finally a parameter controlling the gradual development of plastic deformation. In contrast to previous work, where shaping the stress-strain loops is derived from multiple internal stress states, this effect is here represented by a single parameter. Calibration against numerous experimental results indicates that typically larger plastic strains develop than predicted by the Armstrong–Frederick model, contained as a special case of the present model for a particular choice of the shape parameter.

  17. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  18. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to

  19. Partial sum approaches to mathematical parameters of some growth models

    Science.gov (United States)

    Korkmaz, Mehmet

    2016-04-01

    A growth model is fitted by evaluating the mathematical parameters a, b and c. In this study, the method of partial sums was used. To find the mathematical parameters, firstly three partial sums were used, secondly four partial sums, thirdly five partial sums, and finally N partial sums. The purpose of increasing the partial decomposition is to produce a better model which gives a better expected value by minimizing the error sum of squares in the interval used.

  20. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all....... For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation error is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  1. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  2. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  3. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least square algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.
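
    A minimal sketch of the idea follows, assuming a first-order equivalent-circuit model with an assumed linear OCV curve and assumed parameter values; the proportional-integral correction of the voltage residual is one plausible reading of the "PI-based error adjustment", not the authors' exact formulation.

      import numpy as np

      # Assumed first-order equivalent-circuit parameters (illustrative only).
      dt, Q = 1.0, 2.0 * 3600.0            # time step [s], capacity [A s] (2 Ah)
      R0, Rp, Cp = 0.05, 0.03, 1000.0      # ohmic resistance, polarisation resistance/capacitance
      a = np.exp(-dt / (Rp * Cp))

      def ocv(soc):                        # assumed linear open-circuit-voltage curve
          return 3.0 + 1.2 * soc

      def ekf_pi_soc(current, voltage, x0, kp=1e-3, ki=1e-5):
          """EKF over x = [SOC, Vp] with a proportional-integral correction of the
          voltage residual folded back into the SOC estimate."""
          x = np.array(x0, float)
          P = np.diag([1e-2, 1e-2])
          Qw, Rv = np.diag([1e-7, 1e-6]), 1e-3
          e_int, soc_hist = 0.0, []
          for I, V in zip(current, voltage):
              # predict: Coulomb counting for SOC, first-order RC for the polarisation voltage
              x = np.array([x[0] - dt * I / Q, a * x[1] + Rp * (1 - a) * I])
              F = np.array([[1.0, 0.0], [0.0, a]])
              P = F @ P @ F.T + Qw
              # update with the terminal-voltage measurement V = OCV(SOC) - Vp - R0*I
              e = V - (ocv(x[0]) - x[1] - R0 * I)
              H = np.array([1.2, -1.0])    # [dOCV/dSOC, -1] for the assumed OCV curve
              S = H @ P @ H + Rv
              K = P @ H / S
              x, P = x + K * e, (np.eye(2) - np.outer(K, H)) @ P
              # PI adjustment of the SOC estimate using the remaining voltage error
              e_int += e * dt
              x[0] += kp * e + ki * e_int
              soc_hist.append(x[0])
          return np.array(soc_hist)

      # Usage with synthetic constant-current data (assumed):
      I = np.full(100, 1.0)
      V = ocv(1.0 - np.cumsum(I) * dt / Q) - R0 * I
      print(ekf_pi_soc(I, V, x0=[0.9, 0.0])[-1])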

  4. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    Full Text Available The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  5. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  6. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters of the chicken processing line model.

  7. Lumped-Parameter Models for Windturbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars

    The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computational model significantly. This may be obtained by the fitting of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  8. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
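
    The parameter-estimation use of the Kalman filter can be sketched by augmenting the state vector with the unknown parameters. The toy first-order decay model below is an illustrative assumption, not the heat shock or gene regulation models of the paper.

      import numpy as np

      dt = 0.1

      def ekf_estimate_k(measurements, x0=1.0, k0=0.5):
          """Joint state-parameter EKF on the toy decay model dx/dt = -k*x,
          with the unknown rate k appended to the state vector."""
          z = np.array([x0, k0])                 # augmented state [x, k]
          P = np.diag([0.1, 1.0])
          Qw, Rv = np.diag([1e-6, 1e-6]), 0.01
          for y in measurements:
              x, k = z
              # predict (explicit Euler step; the parameter k follows a random walk)
              z = np.array([x - dt * k * x, k])
              F = np.array([[1.0 - dt * k, -dt * x],
                            [0.0,           1.0  ]])
              P = F @ P @ F.T + Qw
              # update with the noisy observation of x
              H = np.array([1.0, 0.0])
              S = H @ P @ H + Rv
              K = P @ H / S
              z = z + K * (y - z[0])
              P = (np.eye(2) - np.outer(K, H)) @ P
          return z[1]                            # estimated rate constant

      # Synthetic data generated with k = 1.2 (an assumption used only to test the filter).
      rng = np.random.default_rng(1)
      t = dt * np.arange(1, 51)
      y = np.exp(-1.2 * t) + 0.01 * rng.standard_normal(t.size)
      print(f"estimated k = {ekf_estimate_k(y):.2f}")   # should move close to 1.2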

  9. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    Science.gov (United States)

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods, based on objective records (i.e., politically motivated deaths), was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  10. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    Science.gov (United States)

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  11. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model composed of a single PTV accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.

  12. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Science.gov (United States)

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  13. Real-time adjustment of pressure to demand in water distribution systems: Parameter-less P-controller algorithm

    CSIR Research Space (South Africa)

    Page, Philip R

    2016-08-01

    Full Text Available Remote real-time control is currently the most advanced form of pressure management. Here the parameters describing pressure control valves (or pumps) are changed in real-time in such a way to provide the most optimal pressure in the water...

  14. Development of new model for high explosives detonation parameters calculation

    Directory of Open Access Journals (Sweden)

    Jeremić Radun

    2012-01-01

    Full Text Available A simple semi-empirical model for calculating the detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan’s method for determining the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical detonation model based on the BKW EOS, the values calculated with the proposed model are significantly more accurate.

  15. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
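
    A sketch of the core optimization ingredients under the division model follows: undistorting points and measuring how far they deviate from straight lines. The names k1_hough, w, h and the structure of the input lines are hypothetical; the Hough-based initialization and the iterative re-detection of lines described in the paper are not reproduced.

      import numpy as np
      from scipy.optimize import least_squares

      def undistort(points, k1, k2, center):
          """Two-parameter division model: p_u = c + (p_d - c) / (1 + k1*r^2 + k2*r^4)."""
          d = points - center
          r2 = np.sum(d**2, axis=1, keepdims=True)
          return center + d / (1.0 + k1 * r2 + k2 * r2**2)

      def line_residuals(params, lines):
          """Distances of the undistorted points to the best-fit straight line of each
          detected line segment -- the quantity the iterative optimization reduces."""
          k1, k2, cx, cy = params
          center = np.array([cx, cy])
          res = []
          for pts in lines:
              u = undistort(pts, k1, k2, center)
              u0 = u - u.mean(axis=0)
              _, _, vt = np.linalg.svd(u0, full_matrices=False)
              res.append(u0 @ vt[-1])            # vt[-1] is the normal of the fitted line
          return np.concatenate(res)

      # 'lines' would come from a distortion-aware Hough transform: a list of (N_i x 2) arrays
      # of edge points per detected line (hypothetical here), with k1_hough, w, h also assumed.
      # fit = least_squares(line_residuals, x0=[k1_hough, 0.0, w / 2, h / 2], args=(lines,))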

  16. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  17. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  18. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.

  19. Parameter uncertainty analysis of a biokinetic model of caesium

    International Nuclear Information System (INIS)

    Li, W.B.; Oeh, U.; Klein, W.; Blanchardon, E.; Puncher, M.; Leggett, R.W.; Breustedt, B.; Nosske, D.; Lopez, M.A.

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The parameters for the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty of 43 in whole-body retention at later times, say after Day 500, on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS. (authors)
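
    The uncertainty-factor definition used above can be illustrated by propagating an assumed lognormal parameter uncertainty through a toy one-compartment retention model. The distribution and rate constant below are illustrative assumptions, not the values of the Leggett caesium model.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # Toy one-compartment whole-body retention R(t) = exp(-k*t) with an uncertain
      # biological rate constant k (lognormal, GSD 1.3; values assumed, not Leggett's model).
      k = rng.lognormal(mean=np.log(np.log(2) / 110.0), sigma=np.log(1.3), size=n)

      def uncertainty_factor(samples):
          """UF = sqrt(97.5th percentile / 2.5th percentile)."""
          lo, hi = np.percentile(samples, [2.5, 97.5])
          return np.sqrt(hi / lo)

      for t in (1, 10, 100, 1000):               # days after intake
          retention = np.exp(-k * t)
          print(f"day {t:4d}: UF = {uncertainty_factor(retention):.1f}")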

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  1. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  2. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  3. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  4. Sensor placement for calibration of spatially varying model parameters

    Science.gov (United States)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly which requires repeated calibration of the spatially varying parameter, and this significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
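
    The random-field representation underlying the calibration can be sketched with a discrete Karhunen-Loeve expansion, which reduces the spatially varying parameter to a few random coefficients. The covariance kernel, grid and number of retained modes below are assumptions made for illustration.

      import numpy as np

      # Discrete Karhunen-Loeve expansion of a 1-D random field with an assumed
      # squared-exponential covariance (grid, variance and correlation length are assumptions).
      x = np.linspace(0.0, 1.0, 200)
      sigma2, ell = 1.0, 0.2
      C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

      w, v = np.linalg.eigh(C)                   # eigen-decomposition of the covariance matrix
      order = np.argsort(w)[::-1]
      w, v = w[order], v[:, order]

      m = 5                                      # number of retained KL modes
      def sample_field(xi):
          """Field realisation from m independent standard-normal coefficients xi."""
          return v[:, :m] @ (np.sqrt(w[:m]) * xi)

      rng = np.random.default_rng(3)
      realisation = sample_field(rng.standard_normal(m))
      print(f"leading {m} modes capture {w[:m].sum() / w.sum():.1%} of the variance")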

  5. Implied Adjusted Volatility by Leland Option Pricing Models: Evidence from Australian Index Options

    OpenAIRE

    Mimi Hafizah Abdullah; Hanani Farhah Harun; Nik Ruzni Nik Idris

    2014-01-01

    Given that implied volatility is an important factor in financial decision-making, in particular in option pricing valuation, and given that the pricing biases of Leland option pricing models and the implied volatility structure of the options are related, this study examines the implied adjusted volatility smile patterns and term structures in the S&P/ASX 200 index options using the different Leland option pricing models. The examination of the im...

  6. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  7. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of

  8. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others do not provide ease of use. A user-interactive parameter estimation package was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach proves to provide good agreement between predicted and observed data within relatively little computing time and few iterations.
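
    The integration-based least-squares idea can be sketched as follows (in Python rather than the MATLAB implementation of PARES): integrate the dynamic model for trial parameters and minimize the squared mismatch with the measurements. The kinetic model, the true parameter values and the noise level below are assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      # Toy kinetic model A -> B with consumption rate k1 and degradation rate k2.
      def rhs(t, y, k1, k2):
          a, b = y
          return [-k1 * a, k1 * a - k2 * b]

      def residuals(params, t_obs, b_obs):
          k1, k2 = params
          sol = solve_ivp(rhs, (0.0, t_obs[-1]), [1.0, 0.0], t_eval=t_obs, args=(k1, k2))
          return sol.y[1] - b_obs                # mismatch in the observed species B

      # Synthetic measurements generated with k1 = 0.8, k2 = 0.3 plus noise (assumed values).
      rng = np.random.default_rng(7)
      t_obs = np.linspace(0.5, 10.0, 20)
      b_true = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y[1]
      b_obs = b_true + 0.01 * rng.standard_normal(t_obs.size)

      fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_obs, b_obs), bounds=(0.0, np.inf))
      print("estimated k1, k2:", fit.x)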

  9. Improving the realism of hydrologic model through multivariate parameter estimation

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of predictive skills of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with an aim to improve the fidelity of mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow period, and (2) evapotranspiration estimates which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed by in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value for incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10

  10. Ground level enhancement (GLE) energy spectrum parameters model

    Science.gov (United States)

    Qin, G.; Wu, S.

    2017-12-01

    We study the ground level enhancement (GLE) events in solar cycle 23 using the four energy spectrum parameters, the normalization parameter C, low-energy power-law slope γ1, high-energy power-law slope γ2, and break energy E0, obtained by Mewaldt et al. (2012), who fit the observations to a double power-law equation. We divide the GLEs into two groups, one with strong acceleration by interplanetary (IP) shocks and another without strong acceleration, according to the conditions of the solar eruptions. We then fit the four parameters to the solar event conditions to obtain models of the parameters for the two groups of GLEs separately, in order to establish an energy spectrum model of GLEs for future space weather prediction.
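
    A common double power-law (Band-type) parameterization with the four parameters named above is sketched below; the exact functional form used by Mewaldt et al. (2012) may differ in detail, and the parameter values are illustrative assumptions rather than fitted GLE values.

      import numpy as np

      def double_power_law(E, C, g1, g2, E0):
          """Band-type double power law: slope g1 with exponential rollover at low energies,
          slope g2 above the break; the two branches join continuously at E = (g2 - g1) * E0."""
          E = np.asarray(E, dtype=float)
          Eb = (g2 - g1) * E0
          low = C * E**(-g1) * np.exp(-E / E0)
          high = C * E**(-g2) * Eb**(g2 - g1) * np.exp(g1 - g2)
          return np.where(E <= Eb, low, high)

      # Illustrative parameter values (assumptions); E in MeV.
      E = np.logspace(0, 3, 7)
      print(double_power_law(E, C=1e6, g1=1.2, g2=3.5, E0=30.0))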

  11. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al2O3) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  12. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  13. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
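
    The single diode equation referred to above can be sketched by solving the implicit current-voltage relation with a scalar root finder. The five parameter values below are illustrative assumptions, and the sketch shows only the forward model, not the estimation method of the report.

      import numpy as np
      from scipy.optimize import brentq

      # Single diode model: I = IL - I0*(exp((V + I*Rs)/(n*Ns*Vth)) - 1) - (V + I*Rs)/Rsh,
      # solved for I at each voltage with a bracketing root finder (parameter values assumed).
      k_B, q = 1.380649e-23, 1.602176634e-19
      Vth = k_B * 298.15 / q                       # thermal voltage at 25 degC, ~25.7 mV
      IL, I0, Rs, Rsh, n, Ns = 8.0, 1e-9, 0.3, 300.0, 1.2, 60

      def current_at(V):
          f = lambda I: IL - I0 * (np.exp((V + I * Rs) / (n * Ns * Vth)) - 1.0) \
                        - (V + I * Rs) / Rsh - I
          return brentq(f, -2.0, IL + 2.0)

      for V in np.linspace(0.0, 38.0, 8):
          print(f"V = {V:5.1f} V   I = {current_at(V):6.3f} A")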

  14. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Stelian Stancu

    2008-06-01

    Full Text Available Alongside the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the adjusted R.M. Solow model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained for the Romanian economy from the simulations are presented: the ratio of capital to output volume, the output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.
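
    The growth recursion behind such scenario simulations can be sketched with the textbook Solow model in intensive form; the parameter values below are illustrative assumptions, not the Romanian-economy calibration used in the paper.

      # Discrete-time Solow recursion in intensive form (capital per effective worker).
      alpha, s, delta, n, g = 0.33, 0.22, 0.05, 0.01, 0.02   # assumed, not a Romanian calibration

      def solow_path(k0, periods):
          k, path = k0, []
          for _ in range(periods):
              y = k ** alpha                                 # output per effective worker
              k = (s * y + (1.0 - delta) * k) / ((1.0 + n) * (1.0 + g))
              path.append((k, y, k / y))                     # capital, output, capital/output ratio
          return path

      for t, (k, y, koy) in enumerate(solow_path(1.0, 51)):
          if t % 10 == 0:
              print(f"t={t:2d}  k={k:5.2f}  y={y:4.2f}  K/Y={koy:4.2f}")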

  15. Modeling Chinese ionospheric layer parameters based on EOF analysis

    Science.gov (United States)

    Yu, You; Wan, Weixing

    2016-04-01

    Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors individually reflect the spatial distributions (e.g., the latitudinal dependence such as the equatorial ionization anomaly structure and the longitudinal structure with east-west differences) and the temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term observations of ionosondes are assimilated to get the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07 model) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.
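
    The EOF decomposition step can be sketched as an SVD of a space-time anomaly matrix, giving spatial patterns and time-varying amplitudes. The synthetic data below stand in for the IRI-07 background maps and are purely illustrative.

      import numpy as np

      # Synthetic space-time field: rows = time samples, columns = grid points (stands in for
      # the IRI-07 background maps; two oscillating patterns plus noise, all assumed).
      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 4.0 * np.pi, 240)
      x = np.linspace(0.0, 1.0, 60)
      data = (np.outer(np.sin(t), np.cos(np.pi * x))
              + 0.3 * np.outer(np.cos(2.0 * t), np.sin(2.0 * np.pi * x))
              + 0.05 * rng.standard_normal((t.size, x.size)))

      anomaly = data - data.mean(axis=0)           # remove the time mean at each grid point
      U, S, Vt = np.linalg.svd(anomaly, full_matrices=False)

      patterns = Vt                                # EOF spatial patterns (one per row)
      amplitudes = U * S                           # corresponding time-varying EOF amplitudes
      explained = S**2 / np.sum(S**2)
      print("variance explained by the first three EOFs:", np.round(explained[:3], 3))

      # Low-order reconstruction from the two leading modes:
      recon = amplitudes[:, :2] @ patterns[:2, :] + data.mean(axis=0)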

  16. Parameters and variables appearing in repository design models

    International Nuclear Information System (INIS)

    Curtis, R.H.; Wart, R.J.

    1983-12-01

    This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables

  17. A lumped parameter, low dimension model of heat exchanger

    International Nuclear Information System (INIS)

    Kanoh, Hideaki; Furushoo, Junji; Masubuchi, Masami

    1980-01-01

    This paper reports the results of an investigation of the distributed parameter model, the difference model, and a method-of-weighted-residuals model for heat exchangers. Using the method of weighted residuals (MWR), the counter-flow heat exchanger system is approximated by a low-dimension, lumped parameter model. Fundamental equations are obtained by assuming constant specific heat, constant density, identical tube cross-sections, identical heat exchange surfaces, uniform flow velocity, a linear relation between heat transfer and flow velocity, a liquid heat carrier, and thermal insulation of the liquid from the outside. The experimental apparatus was made of acrylic resin. The response of the temperature at the exit of the first liquid to variations in the flow rate of the second liquid was measured and compared with the models. The MWR model gives a good approximation in the low-frequency region, and as the number of divisions increases, the good approximation extends to higher frequencies. (Kato, T.)
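
    A minimal sketch of a low-dimension lumped-parameter approximation of a counter-flow heat exchanger is given below, splitting each stream into a few well-mixed cells; the geometry, residence times and heat-transfer coefficient are illustrative assumptions, not those of the apparatus described in the record.

      # Hedged sketch: lumped-parameter counter-flow heat exchanger with N cells per stream.
      # All numerical values are assumed for illustration.
      import numpy as np
      from scipy.integrate import solve_ivp

      N = 5                       # number of cells per stream (model order)
      tau_h, tau_c = 2.0, 3.0     # residence time per cell, s (assumed)
      ua = 0.4                    # heat exchange rate coefficient per cell, 1/s (assumed)
      Th_in, Tc_in = 80.0, 20.0   # inlet temperatures, deg C

      def rhs(t, y):
          Th, Tc = y[:N], y[N:]
          dTh, dTc = np.empty(N), np.empty(N)
          for i in range(N):
              up_h = Th_in if i == 0 else Th[i - 1]          # hot stream flows 0 -> N-1
              up_c = Tc_in if i == N - 1 else Tc[i + 1]      # cold stream flows N-1 -> 0
              dTh[i] = (up_h - Th[i]) / tau_h - ua * (Th[i] - Tc[i])
              dTc[i] = (up_c - Tc[i]) / tau_c + ua * (Th[i] - Tc[i])
          return np.concatenate([dTh, dTc])

      sol = solve_ivp(rhs, (0.0, 60.0), np.full(2 * N, 20.0), max_step=0.1)
      print("hot-side outlet temperature:", sol.y[N - 1, -1].round(2))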

  18. Control of the SCOLE configuration using distributed parameter models

    Science.gov (United States)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria remain to be studied.
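
    Pole-assignment state feedback of the kind mentioned above can be sketched for a reduced two-state model; the system matrices and target poles below are illustrative assumptions, not the SCOLE continuum model.

      # Hedged sketch: pole-assignment state feedback for an assumed two-state model
      # of a lightly damped structural mode.
      import numpy as np
      from scipy.signal import place_poles

      A = np.array([[0.0, 1.0],
                    [-4.0, -0.1]])                 # lightly damped 2 rad/s mode (assumed)
      B = np.array([[0.0],
                    [1.0]])
      desired_poles = [-1.0 + 2.0j, -1.0 - 2.0j]   # add damping, keep the frequency

      K = place_poles(A, B, desired_poles).gain_matrix
      print("state-feedback gain:", K)
      print("closed-loop poles:", np.linalg.eigvals(A - B @ K))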

  19. Evaluation of the DAVROS (Development And Validation of Risk-adjusted Outcomes for Systems of emergency care) risk-adjustment model as a quality indicator for healthcare.

    Science.gov (United States)

    Wilson, Richard; Goodacre, Steve W; Klingbajl, Marcin; Kelly, Anne-Maree; Rainer, Tim; Coats, Tim; Holloway, Vikki; Townend, Will; Crane, Steve

    2014-06-01

    Risk-adjusted mortality rates can be used as a quality indicator if it is assumed that the discrepancy between predicted and actual mortality can be attributed to the quality of healthcare (ie, the model has attributional validity). The Development And Validation of Risk-adjusted Outcomes for Systems of emergency care (DAVROS) model predicts 7-day mortality in emergency medical admissions. We aimed to test this assumption by evaluating the attributional validity of the DAVROS risk-adjustment model. We selected cases that had the greatest discrepancy between observed mortality and predicted probability of mortality from seven hospitals involved in validation of the DAVROS risk-adjustment model. Reviewers at each hospital assessed hospital records to determine whether the discrepancy between predicted and actual mortality could be explained by the healthcare provided. We received 232/280 (83%) completed review forms relating to 179 unexpected deaths and 53 unexpected survivors. The healthcare system was judged to have potentially contributed to 10/179 (8%) of the unexpected deaths and 26/53 (49%) of the unexpected survivors. Failure of the model to appropriately predict risk was judged to be responsible for 135/179 (75%) of the unexpected deaths and 2/53 (4%) of the unexpected survivors. Some 10/53 (19%) of the unexpected survivors died within a few months of the 7-day period of model prediction. We found little evidence that deaths occurring in patients with a low predicted mortality from risk-adjustment could be attributed to the quality of healthcare provided.

  20. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Because of this, the choice of a specific parameter estimation method is sometimes driven more by its availability than by its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
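
    As a minimal illustration of sampling-based parameter optimization on the first case study mentioned above, the sketch below applies plain Monte Carlo sampling to the two-parameter Rosenbrock function; it is a generic sketch and does not use the SPOT package API.

      # Hedged sketch: plain Monte Carlo sampling of the Rosenbrock toy problem.
      # The search box and sample count are assumed values.
      import numpy as np

      def rosenbrock(x, y):
          return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

      rng = np.random.default_rng(42)
      n_samples = 100_000
      xs = rng.uniform(-2.0, 2.0, n_samples)     # uniform "prior" over the search box
      ys = rng.uniform(-1.0, 3.0, n_samples)
      objective = rosenbrock(xs, ys)

      best = np.argmin(objective)
      print("best sample:", xs[best], ys[best], "objective:", objective[best])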

  1. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  2. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
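
    The one-at-a-time ±20% perturbation used in the two records above can be sketched as follows; the toy drying-time function and parameter values are illustrative assumptions, not the COMSOL multiphase model.

      # Hedged sketch: one-at-a-time +/-20% perturbation of model parameters, ranking
      # them by the relative change of a scalar output. The toy model is a stand-in.
      import numpy as np

      def toy_drying_time(params):
          # crude placeholder: time ~ moisture load / (diffusivity * transfer coefficient)
          return params["moisture_load"] / (params["gas_diffusivity"] * params["heat_transfer"])

      baseline = {"moisture_load": 1.0, "gas_diffusivity": 2.0e-5, "heat_transfer": 25.0}
      y0 = toy_drying_time(baseline)

      for name in baseline:
          changes = []
          for factor in (0.8, 1.2):                      # the +/-20% perturbation
              p = dict(baseline, **{name: baseline[name] * factor})
              changes.append(abs(toy_drying_time(p) - y0) / y0)
          print(f"{name}: max relative output change {max(changes):.2%}")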

  3. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current database of parameters for the models of generating units, including the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
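
    The estimation step, minimizing a mean-square error between measured and modelled waveforms, can be sketched with a generic least-squares fit; the damped-sine stand-in model, the noise level and the parameter values below are illustrative assumptions, not the synchronous generator model of the record.

      # Hedged sketch: estimating model parameters by minimizing the squared deviations
      # between a noisy "measured" transient and a model response.
      import numpy as np
      from scipy.optimize import least_squares

      def model(t, p):
          amp, damping, freq = p
          return amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)

      t = np.linspace(0.0, 2.0, 500)
      true_p = np.array([1.0, 1.5, 3.0])                 # assumed "true" values
      rng = np.random.default_rng(3)
      measured = model(t, true_p) + 0.05 * rng.standard_normal(t.size)

      def residuals(p):
          return model(t, p) - measured

      fit = least_squares(residuals, x0=[0.5, 1.0, 2.5])
      print("estimated parameters:", fit.x.round(3))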

  4. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
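
    A minimal sketch of computing an elasticity numerically, elasticity = (dF/dp)*(p/F) with a central-difference derivative, is given below; the quadratic overdosage objective is an assumed stand-in for an IMRT evaluation function.

      # Hedged sketch: numerical elasticity of an objective with respect to a parameter.
      # The toy "average squared overdosage" objective and values are assumptions.
      import numpy as np

      def objective(dose_threshold, doses=np.linspace(0.0, 70.0, 200)):
          over = np.clip(doses - dose_threshold, 0.0, None)
          return np.mean(over ** 2)

      p = 60.0                                   # current parameter value (assumed)
      h = 1e-3 * p
      dFdp = (objective(p + h) - objective(p - h)) / (2 * h)   # central difference
      elasticity = dFdp * p / objective(p)
      print(f"elasticity of objective w.r.t. dose threshold at {p} Gy: {elasticity:.3f}")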

  5. Assessment of Lumped-Parameter Models for Rigid Footings

    DEFF Research Database (Denmark)

    Andersen, Lars

    2010-01-01

    The quality of consistent lumped-parameter models of rigid footings is examined. Emphasis is put on the maximum response during excitation and the geometrical damping related to free vibrations. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translations as well as torsion and rocking, and the necessity of coupling between horizontal sliding and rocking is discussed. Most of the analyses are carried out for hexagonal footings; but in order to generalise the conclusions to a broader variety of footings, comparisons are made with the response of circular and square foundations.

  6. Climate change decision-making: Model & parameter uncertainties explored

    Energy Technology Data Exchange (ETDEWEB)

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analyses of these uncertainties serve to inform decision makers about the likely outcome of policy initiatives, and help set priorities for research so that the outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change and impacts from these changes and policies for emissions mitigation, and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by the model structure choice, rather than by parameter uncertainties.

  7. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
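
    A minimal sketch of nonlinear parameter estimation for a first-order degradation model C(t) = C0*exp(-k*t) is given below; the data points are synthetic and only illustrate the fitting step, not the methods or data of the record.

      # Hedged sketch: fitting a first-order pesticide degradation model to synthetic data.
      # Initial concentration, rate constant and noise level are assumed values.
      import numpy as np
      from scipy.optimize import curve_fit

      def degradation(t, C0, k):
          return C0 * np.exp(-k * t)

      t_days = np.array([0, 3, 7, 14, 28, 56], dtype=float)
      rng = np.random.default_rng(7)
      conc = degradation(t_days, 10.0, 0.05) * (1 + 0.05 * rng.standard_normal(t_days.size))

      popt, pcov = curve_fit(degradation, t_days, conc, p0=[8.0, 0.1])
      C0_hat, k_hat = popt
      print(f"C0 = {C0_hat:.2f}, k = {k_hat:.3f} 1/day, DT50 = {np.log(2)/k_hat:.1f} days")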

  8. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  9. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  10. Glacial isostatic adjustment at the Laurentide ice sheet margin: Models and observations in the Great Lakes region

    Science.gov (United States)

    Braun, Alexander; Kuo, Chung-Yen; Shum, C. K.; Wu, Patrick; van der Wal, Wouter; Fotopoulos, Georgia

    2008-10-01

    Glacial Isostatic Adjustment (GIA) modelling in North America relies on relative sea level information which is primarily obtained from areas far away from the uplift region. The lack of accurate geodetic observations in the Great Lakes region, which is located in the transition zone between uplift and subsidence due to the deglaciation of the Laurentide ice sheet, has prevented more detailed studies of this former margin of the ice sheet. Recently, observations of vertical crustal motion from improved GPS network solutions and combined tide gauge and satellite altimetry solutions have become available. This study compares these vertical motion observations with predictions obtained from 70 different GIA models. The ice sheet margin is distinct from the centre and far field of the uplift because the sensitivity of the GIA process towards Earth parameters such as mantle viscosity is very different. Specifically, the margin area is most sensitive to the uppermost mantle viscosity and allows for better constraints of this parameter. The 70 GIA models compared herein have different ice loading histories (ICE-3/4/5G) and Earth parameters including lateral heterogeneities. The root-mean-square differences between the 6 best models and the two sets of observations (tide gauge/altimetry and GPS) are 0.66 and 1.57 mm/yr, respectively. Both sets of independent observations are highly correlated and show a very similar fit to the models, which indicates their consistent quality. Therefore, both data sets can be considered as a means for constraining and assessing the quality of GIA models in the Great Lakes region and the former margin of the Laurentide ice sheet.

  11. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    Science.gov (United States)

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this model. In China, research applying LUR models has mostly remained at the model construction stage, and findings from related epidemiological studies have rarely been reported. In this paper, the sources of heterogeneity and the progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes are analyzed. The methodological characteristics of temporally-adjusted LUR models are introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  12. Identifiability and error minimization of receptor model parameters with PET

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs

  13. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path ... PA for simulations. The simulated error vector magnitude (EVM) and adjacent channel power ratio (ACPR) were compared with the measured data to validate the model. The maximum differences between the simulated and measured EVM and ACPR are less than 2% point and 3 dB, respectively.

  14. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for in-flight manipulator control systems in almost zero gravity and a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for getting the initial conditions and a general nonlinear optimization method (MCS, the multilevel coordinate search algorithm) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.

  15. Economic analysis of coal price. Electricity price adjustment in China based on the CGE model

    International Nuclear Information System (INIS)

    He, Y.X.; Yang, L.Y.; Wang, Y.J.; Wang, J.; Zhang, S.L.

    2010-01-01

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and the power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase causes a rise in the cost of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on the total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development and, consequently, electricity price policy making must consider all factors to minimize their adverse influence. (author)

  16. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rate. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the d-th (d≥2) time point ahead. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
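
    The CKLS dynamics dr = (alpha + beta*r) dt + sigma*r^gamma dW can be simulated with a simple Euler-Maruyama scheme, as sketched below; the parameter values are illustrative assumptions, not estimates from the record.

      # Hedged sketch: Euler-Maruyama simulation of the CKLS short-rate dynamics.
      # alpha, beta, sigma, gamma and the starting rate are assumed values.
      import numpy as np

      alpha, beta, sigma, gamma = 0.02, -0.2, 0.1, 0.5
      r0, dt, n_steps = 0.05, 1 / 252, 252 * 5

      rng = np.random.default_rng(1)
      r = np.empty(n_steps + 1)
      r[0] = r0
      for t in range(n_steps):
          dW = rng.normal(0.0, np.sqrt(dt))
          drift = (alpha + beta * r[t]) * dt
          diffusion = sigma * max(r[t], 0.0) ** gamma * dW
          r[t + 1] = max(r[t] + drift + diffusion, 0.0)   # keep the rate non-negative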

  17. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides efficiently estimating the parameter values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  18. Conceptual Model for Simulating the Adjustments of Bankfull Characteristics in the Lower Yellow River, China

    Directory of Open Access Journals (Sweden)

    Yuanjian Wang

    2014-01-01

    Full Text Available We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic equations for mass conservation, friction, and sediment transport capacity, together with the Exner equation, were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the rates of change in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. On inputting the daily flow discharge and sediment load, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. The proposed model allows multiple dependent variables and the input of daily hydrological data for long-term simulations. It links the hydrodynamic and geomorphic processes in a fluvial river and has potential applicability to fluvial rivers undergoing significant adjustments.

  19. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
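
    The distinction drawn above, between the prediction at the mean covariate and the mean of the predictions over the study population, can be sketched for a logistic model as follows; the data and model are synthetic, and this is not the estimator proposed in the record.

      # Hedged sketch: for a logistic model, the mean predicted response over the study
      # population generally differs from the prediction at the mean covariate.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=n)                        # baseline covariate (synthetic)
      group = rng.integers(0, 2, size=n)            # treatment indicator (synthetic)
      logit = -1.0 + 1.5 * group + 2.0 * x
      y = rng.random(n) < 1 / (1 + np.exp(-logit))  # binary outcome

      model = LogisticRegression().fit(np.column_stack([group, x]), y)

      for g in (0, 1):
          Xg = np.column_stack([np.full(n, g), x])              # set everyone to group g
          mean_of_predictions = model.predict_proba(Xg)[:, 1].mean()
          prediction_at_mean = model.predict_proba([[g, x.mean()]])[0, 1]
          print(g, round(mean_of_predictions, 3), round(prediction_at_mean, 3))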

  20. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  1. Investigation of land use effects on Nash model parameters

    Science.gov (United States)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    Flood forecasting is of great importance in hydrologic planning, hydraulic structure design, water resources management and sustainable designs like flood control and management. Nash's instantaneous unit hydrograph is frequently used for simulating the hydrological response of natural watersheds. Urban hydrology is gaining more attention due to population increases and the associated escalation of construction. Rapid development of urban areas affects the hydrologic processes of watersheds by decreasing soil permeability, base flow and lag time and by increasing flood volume, peak runoff rates and flood frequency. In this study the influence of urbanization on the significant parameters of the Nash model has been investigated. These parameters were calculated using three popular methods (i.e. moments, root mean square error and random sampling data generation) in a small watershed consisting of one natural sub-watershed which drains into a residentially developed sub-watershed in the city of Sierra Vista, Arizona. The results indicated that for all three methods the lag time, which is the product of the Nash parameters "K" and "n", is greater in the natural sub-watershed than in the developed one. This logically implies more storage and/or attenuation in the natural sub-watershed. The median K and n parameters derived from the three methods using calibration events were tested via a set of verification events. The results indicated that all three methods have acceptable accuracy in hydrograph simulation. The CDF curves and histograms of the parameters clearly show the difference in the Nash parameter values between the natural and developed sub-watersheds. Some specific upper and lower percentile values around the median of the generated parameters (i.e. 10, 20 and 30%) were analyzed to further investigate the derived parameters. The model was sensitive to variations in the values of the uncertain K and n parameters. Changes in n are smaller than in K in both sub-watersheds, indicating
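
    The Nash model referred to above corresponds to a cascade of n linear reservoirs with storage coefficient K, with instantaneous unit hydrograph u(t) = (1/(K*Gamma(n)))*(t/K)^(n-1)*exp(-t/K) and lag time n*K; the sketch below evaluates it for illustrative parameter values, not the calibrated values of the study.

      # Hedged sketch: ordinates of the Nash instantaneous unit hydrograph.
      # The (n, K) pairs are assumed values, not the study's calibrated parameters.
      import numpy as np
      from scipy.special import gamma

      def nash_iuh(t, n, K):
          """Nash IUH for n linear reservoirs with storage coefficient K; lag time = n*K."""
          return (1.0 / (K * gamma(n))) * (t / K) ** (n - 1) * np.exp(-t / K)

      t = np.linspace(0.01, 20.0, 200)            # hours
      u_natural   = nash_iuh(t, n=3.0, K=2.0)     # assumed natural sub-watershed values
      u_developed = nash_iuh(t, n=2.0, K=1.0)     # assumed developed sub-watershed values
      print("lag times:", 3.0 * 2.0, "vs", 2.0 * 1.0, "hours")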

  2. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  3. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    In the development of thermodynamic databases for multicomponent systems using the cluster expansion–cluster variation methods, we need to have a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of ...

  4. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  5. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss...

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Constraint on Parameters of Inverse Compton Scattering Model for ...

    Indian Academy of Sciences (India)

    H. G. Wang & M. Lv (Center for Astrophysics, Guangzhou University, Guangzhou, China), J. Astrophys. Astr. (2011) 32, 299-300. Abstract: Using the multifrequency radio ...

  8. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  9. Integrating microbial diversity in soil carbon dynamic models parameters

    Science.gov (United States)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamic models have been developed during the last century. These models are mainly deterministic compartment models with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity). But the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods during the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamic models. Some interesting attempts can be found; they are dominated by the incorporation of several compartments for different groups of microbial biomass, in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic models in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity in a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results coming from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue has been incorporated into soils with different pedological characteristics and land use history. Then, the soils were incubated for 104 days and labelled and non-labelled CO2 fluxes were measured at ten

  10. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  11. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  12. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbances, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
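
    A minimal sketch of synchronization-based parameter estimation, using a feedback-coupled observer and an adaptive update law on a one-parameter driven system, is given below; the system, gains and true parameter value are illustrative assumptions, not the oscillators studied in the record.

      # Hedged sketch: adaptive estimation of one unknown parameter 'a' in the driven
      # system x' = -a*x + sin(t), via an observer with linear feedback coupling.
      # True parameter, gains and step size are assumed values.
      import numpy as np

      a_true, k, gamma = 2.0, 5.0, 10.0      # true parameter, coupling gain, adaptation gain
      dt, n_steps = 1e-3, 200_000

      x, x_hat, a_hat = 1.0, 0.0, 0.0
      for i in range(n_steps):
          t = i * dt
          u = np.sin(t)                      # external drive (persistent excitation)
          e = x - x_hat                      # synchronization error
          x     += dt * (-a_true * x + u)
          x_hat += dt * (-a_hat * x + u + k * e)   # observer with linear feedback coupling
          a_hat += dt * (-gamma * e * x)           # adaptive law driving e -> 0, a_hat -> a_true
      print(f"estimated a = {a_hat:.3f} (true value {a_true})")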

  13. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  14. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  15. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions, which transmit signals through the Earth’s upper atmosphere - the ionosphere - makes precise, reliable, and near real-time models of ionospheric parameters ever more crucial. In recent decades, space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or electron density. Among these systems, the current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure for computation of a global model of electron density. In order to model the ionosphere in 3D, electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions of degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are used, but the STEC values are obtained through a simulation procedure based on the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure; moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/COSMIC data.

  16. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for a sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, whereas the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulated and verified runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also showed that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
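
    The perturbation approach described above amounts to a one-at-a-time relative sensitivity index: perturb one parameter by a small fraction and compare the relative change in the output to the relative change in the parameter. The sketch below illustrates the idea on a toy surrogate; the model function and the parameter names are placeholders, not AnnAGNPS itself.

```python
# Minimal sketch of one-at-a-time perturbation sensitivity analysis.
# The model function and parameter names are placeholders, not AnnAGNPS itself.
def sensitivity_index(model, params, name, delta=0.10):
    """Relative change in model output per relative change in one parameter."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] = params[name] * (1.0 + delta)
    return ((model(perturbed) - base) / base) / delta

# Toy surrogate standing in for a runoff/sediment simulation.
def toy_model(p):
    return p["CN"] ** 1.8 * p["LS"] * (1.0 + 0.1 * p["K"])

params = {"CN": 75.0, "LS": 1.2, "K": 0.3}
for name in params:
    print(name, round(sensitivity_index(toy_model, params, name), 3))
```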

  17. Rational Multi-Curve Models with Counterparty-Risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crepey, Stephane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure setup in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to LIBOR swaptions data and show that a rational two-factor lognormal multi-curve model is sufficient to match market data with accuracy. We elu...... obligations. In order to compute counterparty-risk valuation adjustments, such as CVA, we show how positive default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract....

  18. Mass balance model parameter transferability on a tropical glacier

    Science.gov (United States)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers are of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to reproduce the observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, short wave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of between 14 and 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the stake observations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
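
    The calibration strategy described above is essentially a random search over physically plausible parameter ranges. Below is a minimal sketch of that loop with a placeholder model and made-up parameter ranges standing in for the full energy balance model and the stake observations.

```python
import random

# Hedged sketch of random-search calibration: draw each parameter uniformly within a
# plausible range, run a (placeholder) mass balance model, and keep the parameter set
# that best matches an observed surface height change at a stake.
ranges = {"albedo_ice": (0.2, 0.4), "albedo_snow": (0.75, 0.9), "z0": (0.001, 0.05)}

def mass_balance_model(p):
    # Stand-in for the process-based surface mass balance model (not the real physics).
    return -3.0 * (1.0 - p["albedo_ice"]) - 1.0 * (1.0 - p["albedo_snow"]) - 5.0 * p["z0"]

observed = -2.5  # observed surface height change at a stake (placeholder value, m)

best = None
for _ in range(1000):
    p = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    err = abs(mass_balance_model(p) - observed)
    if best is None or err < best[0]:
        best = (err, p)
print("best error:", round(best[0], 3), "parameters:", best[1])
```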

  19. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences of the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of the RADTRAN stop parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to the uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops

  20. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr, and the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of electrons are obtained by solving the radial Schrödinger equation with the new form of potential function by Numerov's numerical method. The results show that our new form of potential function is suitable for high, medium and low Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present new potential function. (atomic and molecular physics)
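
    As a minimal illustration of the Numerov scheme mentioned above, the sketch below integrates the radial Schrödinger equation u''(r) = f(r)u(r) in atomic units. A plain Coulomb potential at the known hydrogen 1s energy stands in for the paper's four-parameter model potential; only the stepping formula is the point here.

```python
import numpy as np

# Hedged sketch of Numerov integration for u''(r) = f(r) u(r),
# with f = l(l+1)/r**2 + 2*(V(r) - E) in atomic units. A Coulomb potential and the
# known 1s energy are used purely as a check case for the numerical scheme.
def numerov(f, r, u0, u1):
    h2 = (r[1] - r[0]) ** 2
    c = 1.0 - h2 * f / 12.0
    u = np.empty_like(r)
    u[0], u[1] = u0, u1
    for i in range(1, len(r) - 1):
        u[i + 1] = ((12.0 - 10.0 * c[i]) * u[i] - c[i - 1] * u[i - 1]) / c[i + 1]
    return u

Z, l, E = 1.0, 0, -0.5                       # hydrogen 1s as a check case
r = np.linspace(1e-6, 8.0, 2001)
f = l * (l + 1) / r**2 + 2.0 * (-Z / r - E)
u = numerov(f, r, 0.0, 1e-6)
u /= np.sqrt(np.sum(u**2) * (r[1] - r[0]))   # normalise the radial function
print("peak of |u| at r =", round(float(r[np.argmax(np.abs(u))]), 2), "bohr")  # close to 1 bohr
```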

  1. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    Science.gov (United States)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U. S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework for water resources decision makers to plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match the locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision makers.
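
    One widely used form of such statistical adjustment is quantile mapping, which replaces each simulated value with the observed value at the same quantile. The sketch below illustrates the general idea on synthetic data; it is not the specific USACE procedure.

```python
import numpy as np

# Hedged sketch of empirical quantile mapping: shift each simulated value to the
# observed value at the matching quantile. Data below are synthetic placeholders.
def quantile_map(simulated, observed, values):
    q = np.linspace(0.0, 1.0, 101)
    sim_q = np.quantile(simulated, q)
    obs_q = np.quantile(observed, q)
    return np.interp(values, sim_q, obs_q)   # map through matched quantiles

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=12.0, size=3000)    # "locally observed" flows
sim = rng.gamma(shape=2.5, scale=8.0, size=3000)     # biased model output
adjusted = quantile_map(sim, obs, sim)
print("observed mean %.1f, raw model mean %.1f, adjusted mean %.1f"
      % (obs.mean(), sim.mean(), adjusted.mean()))
```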

  2. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  3. Modeling extreme events: Sample fraction adaptive choice in parameter estimation

    Science.gov (United States)

    Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata

    2012-09-01

    When modeling extreme events there are a few primordial parameters, among which we refer to the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for an adequate choice of k. Choosing some well-known estimators of those parameters, we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
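
    The trade-off described above is easy to see with the Hill estimator of a positive extreme value index computed over a range of k. The sketch below uses a heavy-tailed sample with a known tail index of 0.5; it illustrates the dependence on k, not the adaptive algorithm itself.

```python
import numpy as np

# Hedged sketch of the Hill estimator of the extreme value index, evaluated for
# several choices of k (number of upper order statistics).
def hill(sample, k):
    x = np.sort(sample)[::-1]                      # descending order statistics
    return np.mean(np.log(x[:k])) - np.log(x[k])

rng = np.random.default_rng(1)
data = np.abs(rng.standard_t(df=2, size=2000))     # true extreme value index = 0.5
for k in (20, 50, 100, 400, 1000):
    print(f"k={k:4d}  Hill estimate = {hill(data, k):.3f}")
```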

  4. Robust linear parameter varying induction motor control with polytopic models

    Directory of Open Access Journals (Sweden)

    Dalila Khamari

    2013-01-01

    Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To this end, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. The novelty lies in the fact that the synthesis of the LPV feedback controller for the inner loop takes the rotor resistance and the mechanical speed into account as varying parameters. An LPV flux observer is also synthesized to estimate the rotor flux and provide the reference for the above regulator. The induction motor is described by a polytopic model because of its affine dependence on speed and rotor resistance, whose values can be estimated online during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, where robust stability and high performance have been achieved over the entire operating range of the induction motor.

  5. Model parameter learning using Kullback-Leibler divergence

    Science.gov (United States)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
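
    For an exponential-family model p_J(s) proportional to exp(J*T(s)), the gradient of the KL divergence with respect to the coupling J reduces to a moment mismatch, E_pJ[T] - E_data[T], so the fit can be written as simple gradient descent. The sketch below does this for a six-spin periodic Ising chain small enough to enumerate exactly; the "data" distribution is itself a Boltzmann distribution with a known coupling, which is an assumption made purely for illustration.

```python
import itertools
import numpy as np

# Hedged sketch of coupling estimation by direct KL-divergence minimisation for a
# tiny periodic 1D Ising chain (6 spins), enumerated exactly.
N = 6
states = np.array(list(itertools.product([-1, 1], repeat=N)))
T = np.sum(states * np.roll(states, -1, axis=1), axis=1)   # sum of neighbour products

def boltzmann(J):
    w = np.exp(J * T)
    return w / w.sum()

J_true = 0.7
target = boltzmann(J_true)          # stands in for the empirical spin statistics

J, lr = 0.0, 0.05
for _ in range(500):
    grad = boltzmann(J) @ T - target @ T     # d KL(target || p_J) / dJ
    J -= lr * grad
print("recovered coupling:", round(J, 3), " (true value 0.7)")
```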

  6. Biosphere modelling for a HLW repository - scenario and parameter variations

    International Nuclear Information System (INIS)

    Grogan, H.

    1985-03-01

    In Switzerland high-level radioactive wastes have been considered for disposal in deep-lying crystalline formations. The individual doses to man resulting from radionuclides entering the biosphere via groundwater transport are calculated. The main recipient area modelled, which constitutes the base case, is a broad gravel terrace sited along the south bank of the river Rhine. An alternative recipient region, a small valley with a well, is also modelled. A number of parameter variations are performed in order to ascertain their impact on the doses. Finally two scenario changes are modelled somewhat simplistically, these consider different prevailing climates, namely tundra and a warmer climate than present. In the base case negligibly low doses to man in the long term, resulting from the existence of a HLW repository have been calculated. Cs-135 results in the largest dose (8.4E-7 mrem/y at 6.1E+6 y) while Np-237 gives the largest dose from the actinides (3.6E-8 mrem/y). The response of the model to parameter variations cannot be easily predicted due to non-linear coupling of many of the parameters. However, the calculated doses were negligibly low in all cases as were those resulting from the two scenario variations. (author)

  7. Thermal Model Parameter Identification of a Lithium Battery

    Directory of Open Access Journals (Sweden)

    Dirk Nissing

    2017-01-01

    Full Text Available The temperature of a Lithium battery cell is important for its performance, efficiency, safety, and capacity and is influenced by the environmental temperature and by the charging and discharging process itself. Battery Management Systems (BMS) take this effect into account. As the temperature at the battery cell is difficult to measure, the temperature is often measured on or near the poles of the cell, although the accuracy of predicting the cell temperature from those quantities is limited. Therefore a thermal model of the battery is used in order to calculate and estimate the cell temperature. This paper uses a simple RC-network representation for the thermal model and shows how the thermal parameters are identified using input/output measurements only, where the load current of the battery represents the input while the temperatures at the poles represent the outputs of the measurement. With a single measurement the eight model parameters (thermal resistances, electric contact resistances, and heat capacities) can be determined using the method of least squares. Experimental results show that the simple model with the identified parameters fits the measurements very accurately.
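
    A reduced version of this identification problem can be written as an ordinary least-squares fit of a first-order RC model to temperature data. The sketch below assumes a single thermal resistance and heat capacity with a known electrical resistance generating the ohmic losses; it illustrates the fitting step on synthetic data, not the full eight-parameter network of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch: identify a thermal resistance R (K/W) and heat capacity C (J/K) of a
# reduced first-order RC model by least squares; the electrical resistance r_elec that
# generates the ohmic losses is assumed known. All values are illustrative.
dt, T_amb, r_elec = 1.0, 25.0, 0.05
t = np.arange(0.0, 3600.0, dt)
current = 20.0 * np.sin(2.0 * np.pi * t / 1200.0) ** 2       # load current profile (A)
power = r_elec * current ** 2                                # ohmic heat input (W)

def simulate(params):
    R, C = params
    T = np.empty_like(t)
    T[0] = T_amb
    for k in range(len(t) - 1):                              # explicit Euler integration
        T[k + 1] = T[k] + dt * (power[k] - (T[k] - T_amb) / R) / C
    return T

true = (2.0, 800.0)
measured = simulate(true) + np.random.default_rng(2).normal(0.0, 0.05, t.size)

fit = least_squares(lambda p: simulate(p) - measured, x0=(1.0, 500.0),
                    bounds=([0.1, 100.0], [10.0, 5000.0]))
print("identified (R, C):", np.round(fit.x, 2), " true:", true)
```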

  8. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    Science.gov (United States)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and
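
    A compact version of the LHS/PRCC workflow looks as follows: draw a Latin hypercube sample of the inputs, run the model, rank-transform everything, and correlate each ranked input with the ranked output after regressing out the other ranked inputs. The toy model and the three parameter names below are placeholders, not the VIIP models described above.

```python
import numpy as np
from scipy.stats import qmc, rankdata

# Hedged sketch of Latin hypercube sampling plus partial rank correlation coefficients.
rng = np.random.default_rng(3)
names = ["compliance", "resistance", "pressure"]           # placeholder parameters
lower, upper = np.array([0.5, 1.0, 5.0]), np.array([2.0, 4.0, 15.0])

X = qmc.scale(qmc.LatinHypercube(d=3, seed=3).random(n=200), lower, upper)
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.1, 200)  # toy model

def prcc(X, y, j):
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    others = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(names):
    print(f"PRCC({name}) = {prcc(X, y, j):+.2f}")
```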

  9. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Science.gov (United States)

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  10. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Full Text Available Classical calibration, or space resection, is a fundamental task in photogrammetry. A lack of sufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior parameters (focal length and principal point coordinates) and the exterior orientation parameters have to be determined by a complementary method. As the lens distortion is very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of the exterior orientation parameters with no mention of the simultaneous determination of the inner orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known both in the image and object reference systems. The non-linearity of the model, together with the problems of locating points in digital images and of identifying the maximum number of GPS-measured control points, are the main drawbacks of the classical approaches. This paper presents a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Under this assumption, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, with and without the use of control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared with a single control point and with no control points.

  11. Contaminant transport in aquifers: improving the determination of model parameters

    International Nuclear Information System (INIS)

    Sabino, C.V.S.; Moreira, R.M.; Lula, Z.L.; Chausson, Y.; Magalhaes, W.F.; Vianna, M.N.

    1998-01-01

    Parameters conditioning the migration behavior of cesium and mercury are measured with their tracers Cs-137 and Hg-203 in the laboratory, using both batch and column experiments. Batch tests were used to define the sorption isotherm characteristics. Also investigated were the influences of some test parameters, in particular those due to the volume of water to mass of soil ratio (V/m). A provisional relationship between V/m and the distribution coefficient, Kd, has been advanced, and a procedure to estimate Kd values valid for environmental values of the ratio V/m has been suggested. Column tests provided the parameters for a transport model. A major problem to be dealt with in such tests is the collimation of the radioactivity probe. Besides mechanically optimizing the collimator, a deconvolution procedure has been suggested and tested, with statistical criteria, to filter off both noise and spurious tracer signals. Correction procedures for the integrating effect introduced by sampling at the exit of columns have also been developed. These techniques may be helpful in increasing the accuracy required in the measurement of parameters conditioning contaminant migration in soils, thus allowing more reliable predictions based on mathematical model applications. (author)

  12. HOM study and parameter calculation of the TESLA cavity model

    CERN Document Server

    Zeng, Ri-Hua; Gerigk Frank; Wang Guang-Wei; Wegner Rolf; Liu Rong; Schuh Marcel

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H⁻ accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. In the calculations, the scripting language in HFSS was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts can also be applied to other cavities with different cell numbers and geometric structures). The results calculated automatically are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.

  13. The definition of input parameters for modelling of energetic subsystems

    Directory of Open Access Journals (Sweden)

    Ptacek M.

    2013-06-01

    Full Text Available This paper is a short review and basic description of mathematical models of renewable energy sources, which represent the individual subsystems investigated within a system created in Matlab/Simulink. It covers the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper presents a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main graphic results evaluated and achieved for the suggested parameters and for all the individual subsystems mentioned above are shown.

  14. The definition of input parameters for modelling of energetic subsystems

    Science.gov (United States)

    Ptacek, M.

    2013-06-01

    This paper is a short review and basic description of mathematical models of renewable energy sources, which represent the individual subsystems investigated within a system created in Matlab/Simulink. It covers the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper presents a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main graphic results evaluated and achieved for the suggested parameters and for all the individual subsystems mentioned above are shown.

  15. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  16. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3.) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  17. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...... be disregarded without significant loss of accuracy. Finally, special attention is drawn to the influence of the skirt stiffness, i.e. whether the embedded part of the caisson is rigid or flexible....

  18. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily accessible water resources. Increasing water use and wastewater discharge cause drastic changes in surface water quality. The importance of water quality for these vulnerable and essential water supply resources is clear. Unfortunately, in recent years the entry of pollutants into water bodies has increased because of urban population growth, economic development, and growing industrial production. Water quality parameters express the physical, chemical, and biological features of water, so monitoring water quality is more necessary than ever. Each use of water, such as agriculture, drinking, industry, or aquaculture, requires water of a particular quality, and an accurate estimate of the concentration of each water quality parameter is therefore important. Material and Methods: In this research, two input variable selection methods (namely, correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) reviewing the data, (2) identifying the input data that have an effect on the output data, and (3) selecting the training and testing data. A Genetic Algorithm-Least Squares Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. In the LSSVR method it is assumed that the relationship between input and output variables is nonlinear, but a nonlinear mapping can create a so-called feature space in which the relationship between input and output variables is linear. The developed algorithm maximizes the accuracy of the LSSVR method by tuning the LSSVR parameters automatically. The genetic algorithm (GA) is an evolutionary algorithm that can automatically find the optimum coefficients of the Least Squares Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to

  19. A procedure for determining parameters of a simplified ligament model.

    Science.gov (United States)

    Barrett, Jeff M; Callaghan, Jack P

    2018-01-03

    A previous mathematical model of ligament force generation treated ligament behavior as that of a population of collagen fibres arranged in parallel. When damage was ignored in this model, an expression for ligament force was obtained in terms of the deflection x, the effective stiffness k, the mean collagen slack length μ, and the standard deviation of slack lengths σ. We present a simple three-step method for determining the three model parameters (k, μ, and σ) from force-deflection data: (1) determine the equation of the line in the linear region of this curve; its slope is k and its x-intercept is -μ; (2) interpolate the force-deflection data at x = -μ to obtain F₀; (3) calculate σ with the equation σ = √(2π)·F₀/k. Results from this method were in good agreement with those obtained from a least-squares procedure on experimental data, all falling within 6%. Therefore, parameters obtained using the proposed method provide a systematic way of reporting ligament parameters, or an initial guess for nonlinear least-squares fitting. Copyright © 2017 Elsevier Ltd. All rights reserved.
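
    A minimal sketch of this three-step procedure on synthetic data generated from the underlying fibre-recruitment model (normally distributed slack lengths) follows. The fraction of the curve treated as the linear region is an assumption, and the x-intercept is reported directly as the mean slack length, so the sign convention may differ from the abstract.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of the three-step parameter extraction from a force-deflection curve.
def ligament_parameters(x, F, linear_fraction=0.3):
    n = int(len(x) * linear_fraction)
    k, b = np.polyfit(x[-n:], F[-n:], 1)      # step 1: line through the linear region
    x0 = -b / k                               # x-intercept, taken here as the mean slack length
    F0 = np.interp(x0, x, F)                  # step 2: force where the fitted line crosses zero
    sigma = np.sqrt(2.0 * np.pi) * F0 / k     # step 3: spread of the slack lengths
    return k, x0, sigma

# Synthetic data from the fibre-recruitment model: F = k*sigma*(z*Phi(z) + phi(z)), z = (x - mu)/sigma.
k_true, mu_true, sig_true = 40.0, 2.0, 0.4
x = np.linspace(0.0, 5.0, 200)
z = (x - mu_true) / sig_true
F = k_true * sig_true * (z * norm.cdf(z) + norm.pdf(z))
print("recovered (k, mu, sigma):", np.round(ligament_parameters(x, F), 3))  # ~ (40, 2, 0.4)
```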

  20. Modelling spatial-temporal and coordinative parameters in swimming.

    Science.gov (United States)

    Seifert, L; Chollet, D

    2009-07-01

    This study modelled the changes in spatial-temporal and coordinative parameters through race paces in the four swimming strokes. The arm and leg phases in simultaneous strokes (butterfly and breaststroke) and the inter-arm phases in alternating strokes (crawl and backstroke) were identified by video analysis to calculate the time gaps between propulsive phases. The relationships among velocity, stroke rate, stroke length and coordination were modelled by polynomial regression. Twelve elite male swimmers swam at four race paces. Quadratic regression modelled the changes in spatial-temporal and coordinative parameters with velocity increases for all four strokes. First, the quadratic regression between coordination and velocity showed changes common to all four strokes. Notably, the time gaps between the key points defining the beginning and end of the stroke phases decreased with increases in velocity, which led to decreases in glide times and increases in the continuity between propulsive phases. Conjointly, the quadratic regression among stroke rate, stroke length and velocity was similar to the changes in coordination, suggesting that these parameters may influence coordination. The main practical application for coaches and scientists is that ineffective time gaps can be distinguished from those that simply reflect an individual swimmer's profile by monitoring the glide times within a stroke cycle. In the case of ineffective time gaps, targeted training could improve the swimmer's management of glide time.

  1. Optimal Portfolio Analysis of Sharia Stocks Using the Liquidity Adjusted Capital Asset Pricing Model (LCAPM)

    Directory of Open Access Journals (Sweden)

    Nila Cahyati

    2015-04-01

    Full Text Available Investment is characterized by the trade-off between return and risk. Optimal portfolio construction is used to maximize return and minimize risk. The Liquidity Adjusted Capital Asset Pricing Model (LCAPM) is a recent extension of the CAPM that accounts for liquidity. Combining a liquidity indicator with the CAPM method can help maximize return and minimize risk. The aim of this study is to compare the expected returns and risks of stocks and to determine the proportions in the optimal portfolio. The sample consists of JII (Jakarta Islamic Index) stocks for the period January 2013 - November 2014. The results show that the LCAPM portfolio has an expected return of 0.0956 with a risk of 0.0043, composed of AALI shares (55.19%) and PGAS shares (44.81%).

  2. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    Science.gov (United States)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have successfully adopted this mature technology, among them environmental monitoring. One difficult task is how to acquire accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One notable advantage of this method compared with traditional methods is that a three-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the pose of the UAS sensors accurately and to reconstruct 3D models of the environment, thus providing the accurate positions needed for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency with no loss of accuracy compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from a strong dependence on the initial values, making it unable to converge quickly or to a stable state. In contrast, the APCP method can deal with quite complex UAS monitoring conditions, as it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and

  3. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  4. Modeling and Control of Adjustable Articulated Parallel Compliant Actuation Arrangements in Articulated Robots

    Directory of Open Access Journals (Sweden)

    Wesley Roozing

    2018-02-01

    Full Text Available Considerable advances in robotic actuation technology have been made in recent years. Particularly the use of compliance has increased, both as series elastic elements as well as in parallel to the main actuation drives. This work focuses on the model formulation and control of compliant actuation structures including multiple branches and multiarticulation, and significantly contributes by proposing an elegant modular formulation that describes the energy exchange between the compliant elements and articulated multibody robot dynamics using the concept of power flows, and a single matrix that describes the entire actuation topology. Using this formulation, a novel gradient descent based control law is derived for torque control of compliant actuation structures with adjustable pretension, with proven convexity for arbitrary actuation topologies. Extensions toward handling unidirectionality of elastic elements and joint motion compensation are also presented. A simulation study is performed on a 3-DoF leg model, where series-elastic main drives are augmented by parallel elastic tendons with adjustable pretension. Two actuation topologies are considered, one of which includes a biarticulated tendon. The data demonstrate the effectiveness of the proposed modeling and control methods. Furthermore, it is shown the biarticulated topology provides significant benefits over the monoarticulated arrangement.

  5. Local sensitivity analysis of a distributed parameters water quality model

    International Nuclear Information System (INIS)

    Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.

    1997-01-01

    A local sensitivity analysis is presented of a 1D water-quality reaction-diffusion model. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, that is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorous (RP), phytoplankton, and zooplankton densities, Dissolved Oxygen (DO) and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and the ranking of the parameters related to primary production in different mixing conditions

  6. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  7. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  8. Obtaining adjusted prevalence ratios from logistic regression models in cross-sectional studies.

    Science.gov (United States)

    Bastos, Leonardo Soares; Oliveira, Raquel de Vasconcellos Carvalhaes de; Velasque, Luciane de Souza

    2015-03-01

    In the last decades, the use of the epidemiological prevalence ratio (PR) instead of the odds ratio has been debated as a measure of association in cross-sectional studies. This article addresses the main difficulties in the use of statistical models for the calculation of PR: convergence problems, availability of tools and inappropriate assumptions. We implement the direct approach to estimate the PR from binary regression models based on two methods proposed by Wilcosky & Chambless and compare it with different methods. We used three examples and compared the crude and adjusted estimates of the PR with the estimates obtained by use of log-binomial regression, Poisson regression and the prevalence odds ratio (POR). PRs obtained from the direct approach resulted in values close enough to those obtained by log-binomial and Poisson, while the POR overestimated the PR. The model implemented here showed the following advantages: no numerical instability; an adequate assumed probability distribution; and availability through the R statistical package.
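
    A common alternative with the same goal is Poisson regression with a robust (sandwich) variance, which also returns an adjusted PR directly. The sketch below simulates a common outcome with a known prevalence ratio and contrasts the adjusted PR with the prevalence odds ratio from ordinary logistic regression; it uses statsmodels rather than the R implementation discussed in the article, and the covariates are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch: adjusted prevalence ratio from Poisson regression with robust variance,
# compared with the prevalence odds ratio from logistic regression, on simulated data.
rng = np.random.default_rng(4)
n = 5000
exposed = rng.integers(0, 2, n)
age = rng.normal(50, 10, n)
p = np.clip(0.20 * np.where(exposed == 1, 1.8, 1.0) * np.exp(0.01 * (age - 50)), 0, 1)
y = rng.binomial(1, p)                      # common outcome, true PR = 1.8

X = sm.add_constant(np.column_stack([exposed, age]))
pr = np.exp(sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0").params[1])
por = np.exp(sm.Logit(y, X).fit(disp=0).params[1])
print(f"adjusted PR ~ {pr:.2f}   prevalence OR ~ {por:.2f}   (true PR = 1.8)")
```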

  9. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law leading to a set of first order differential equations. The main objective of this study is to review classic point kinetics equation in order to approximate its results to the case when it is considered the time variation of the neutron currents. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model and then it is determined an empirical adjustment factor that modifies the point reactor kinetics equation to the real scenario. (author)

  10. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law leading to a set of first order differential equations. The main objective of this study is to review classic point kinetics equation in order to approximate its results to the case when it is considered the time variation of the neutron currents. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model and then it is determined an empirical adjustment factor that modifies the point reactor kinetics equation to the real scenario. (author)
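
    For reference, the classic point kinetics system that such an adjustment factor modifies can be integrated directly. The sketch below uses one effective delayed neutron group and illustrative constants; it is the textbook baseline, not the adjusted model derived in the report.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of the classic point kinetics equations with one delayed group.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4     # delayed fraction, decay constant (1/s), generation time (s)

def point_kinetics(t, y, rho):
    n, c = y
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

rho = 0.1 * beta                             # small positive step reactivity
y0 = [1.0, beta / (lam * Lambda)]            # start from equilibrium precursor concentration
sol = solve_ivp(point_kinetics, (0.0, 20.0), y0, args=(rho,), max_step=0.01)
print("relative power after 20 s:", round(float(sol.y[0][-1]), 3))
```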

  11. Finding the effective parameter perturbations in atmospheric models: the LORENZ63 model as case study

    NARCIS (Netherlands)

    Moolenaar, H.E.; Selten, F.M.

    2004-01-01

    Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate
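
    A small-scale version of the experiment described above can be run directly on the Lorenz63 equations: perturb one parameter and compare a long-term ("climate") statistic rather than individual trajectories, since the trajectories themselves diverge chaotically. The parameter values and the statistic below are illustrative choices, not those of the cited study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: effect of perturbing the rho parameter of Lorenz63 on a climate statistic.
def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def climate_mean_z(rho):
    sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0],
                    args=(10.0, rho, 8.0 / 3.0), max_step=0.01)
    z = sol.y[2][sol.t > 50.0]               # discard the transient
    return z.mean()

for rho in (28.0, 28.5, 29.0):               # perturbations of the rho parameter
    print(f"rho = {rho:5.1f}   mean z over the attractor ~ {climate_mean_z(rho):.2f}")
```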

  12. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well...... for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss......-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...

  13. Flare parameters inferred from a 3D loop model database

    Science.gov (United States)

    Cuambe, Valente A.; Costa, J. E. R.; Simões, P. J. A.

    2018-04-01

    We developed a database of pre-calculated flare images and spectra exploring a set of parameters which describe the physical characteristics of coronal loops and accelerated electron distribution. Due to the large number of parameters involved in describing the geometry and the flaring atmosphere in the model used (Costa et al. 2013), we built a large database of models (˜250 000) to facilitate the flare analysis. The geometry and characteristics of non-thermal electrons are defined on a discrete grid with spatial resolution greater than 4 arcsec. The database was constructed based on general properties of known solar flares and convolved with instrumental resolution to replicate the observations from the Nobeyama radio polarimeter (NoRP) spectra and Nobeyama radio-heliograph (NoRH) brightness maps. Observed spectra and brightness distribution maps are easily compared with the modelled spectra and images in the database, indicating a possible range of solutions. The parameter search efficiency in this finite database is discussed. Eight out of ten parameters analysed for one thousand simulated flare searches were recovered with a relative error of less than 20 per cent on average. In addition, from the analysis of the observed correlation between NoRH flare sizes and intensities at 17 GHz, some statistical properties were derived. From these statistics the energy spectral index was found to be δ ˜ 3, with non-thermal electron densities showing a peak distribution ⪅10⁷ cm⁻³, and B_photosphere ⪆ 2000 G. Some bias for larger loops, with heights as great as ˜2.6 × 10⁹ cm, and for looptop events was noted. An excellent match of the spectrum and the brightness distribution at 17 and 34 GHz of the 2002 May 31 flare is presented as well.

  14. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including, the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  15. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model) for the Greater London

    Directory of Open Access Journals (Sweden)

    Ali P. Yunus

    2016-04-01

    Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK climatic projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub type flood inundation modelling for London boroughs.
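
    At its core, the bathtub approach reduces to thresholding a DEM against a tidally adjusted water level. The sketch below applies that threshold to a synthetic elevation grid under a few assumed SLR scenarios; a real analysis would also enforce hydrological connectivity to the sea and use the actual DEMs discussed above.

```python
import numpy as np

# Hedged sketch of bathtub-style inundation mapping on a synthetic DEM: a cell is
# flagged as inundated when its elevation falls below the adjusted water level.
rng = np.random.default_rng(5)
dem = rng.normal(loc=3.0, scale=2.0, size=(500, 500))     # synthetic elevations (m)
cell_area_km2 = (30.0 / 1000.0) ** 2                      # e.g. 30 m grid cells

for slr in (0.3, 0.6, 1.0):                               # assumed SLR scenarios (m)
    water_level = 0.8 + slr                               # assumed mean high tide + rise (m)
    flooded = dem < water_level
    print(f"SLR {slr:.1f} m: {flooded.mean()*100:.1f}% of cells, "
          f"{flooded.sum()*cell_area_km2:.1f} km2 inundated")
```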

  16. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  17. Some notes on unobserved parameters (frailties) in reliability modeling

    International Nuclear Information System (INIS)

    Cha, Ji Hwan; Finkelstein, Maxim

    2014-01-01

    Unobserved random quantities (frailties) often appear in various reliability problems, especially when dealing with the failure rates of items from heterogeneous populations. As the failure rate is a conditional characteristic, the distributions of these random quantities, similar to Bayesian approaches, are updated in accordance with the corresponding survival information. In some instances, apart from their statistical meaning, frailties can also have useful interpretations describing the underlying lifetime model. We discuss and clarify these issues in a reliability context and present and analyze several meaningful examples. We consider the proportional hazards model with a random factor; the stress–strength model, where the unobserved strength of a system can be viewed as frailty; a parallel system with a random number of components and, finally, the first passage time problem for the Wiener process with random parameters. - Highlights: • We discuss and clarify the notion of frailty in a reliability context and present and analyze several meaningful examples. • The paper provides a new insight and general perspective on reliability models with unobserved parameters. • The main message of the paper is well illustrated by several meaningful examples and emphasized by detailed discussion.

  18. Hydrological Modelling and Parameter Identification for Green Roof

    Science.gov (United States)

    Lo, W.; Tung, C.

    2012-12-01

    Green roofs, a multilayered system covered by plants, can be used to replace traditional concrete roofs as one of various measures to mitigate the increasing stormwater runoff in the urban environment. Moreover, in the face of the high uncertainty of climate change, conventional engineering measures may prove to be inappropriate adaptations; green roofs, by contrast, are no-regret and flexible options, and are therefore particularly important and suitable. The related technology has been developed for several years, and research evaluating the stormwater reduction performance of green roofs is flourishing. Many European countries, cities in the U.S., and other local governments incorporate green roofs into stormwater control policy. Therefore, in terms of stormwater management, it is necessary to develop a robust hydrologic model to quantify the efficacy of green roofs over different types of designs and environmental conditions. In this research, a physically based hydrologic model is proposed to simulate the water flow processes in the green roof system. In particular, the model adopts the concept of water balance, providing a relatively simple and intuitive formulation. The research also compares two methods for the surface water balance calculation: one is based on the Green-Ampt equation, and the other on the SCS curve number method. A green roof experiment is designed to collect weather data and water discharge. The proposed model is then verified with these observed data; furthermore, the parameters used in the model are calibrated to find appropriate values for the green roof hydrologic simulation. This research proposes a simple physically based hydrologic model and procedures to determine its parameters.

  19. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
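
    The following sketch illustrates, under simplifying assumptions, the two ingredients of such a joint model: transition probabilities of a two-state continuous-time Markov chain obtained from its rate matrix, and a logistic link in which the (in practice unobserved) transition rates enter as covariates. The rate and coefficient values are made up for illustration, and the sketch omits the maximum likelihood machinery used for estimation in the paper.

```python
import numpy as np
from scipy.linalg import expm

def two_state_transition_matrix(lam01, lam10, t):
    """Transition probabilities over time t for a 2-state CTMC with rates lam01 (0->1) and lam10 (1->0)."""
    Q = np.array([[-lam01, lam01],
                  [lam10, -lam10]])
    return expm(Q * t)

def outcome_probability(lam01, lam10, beta0, beta1, beta2):
    """Logistic model for the binary cross-sectional response, with the transition rates as covariates."""
    eta = beta0 + beta1 * lam01 + beta2 * lam10
    return 1.0 / (1.0 + np.exp(-eta))

P = two_state_transition_matrix(lam01=0.4, lam10=0.7, t=2.0)
print(P)                                   # each row sums to 1
print(outcome_probability(0.4, 0.7, beta0=-1.0, beta1=2.0, beta2=-0.5))
```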

  20. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. The mechanical nature of the means of production is complemented by control and electronic devices in the context of intelligent industry. Until now, the selection of production machines for a technological process or technological project has in practice often been resolved only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is therefore necessary to use computing techniques and decision-making methods, from heuristics to more precise methodological procedures, during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  1. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  2. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates its parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM gives a better fit to the data, more reasonable parameter estimates, and smaller prediction error compared with TSM.
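
    As a hedged illustration of the one-step (Marquardt) approach, the sketch below fits a hypothetical bivariate rubber-aging model, with time and temperature as independent variables, using SciPy's Levenberg-Marquardt implementation. The functional form, reference temperature and synthetic data are assumptions for demonstration only, not the model used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314       # gas constant, J mol^-1 K^-1
T_REF = 353.0   # reference temperature (K), used to decorrelate the Arrhenius parameters

def aging_model(X, a, tau_ref, E):
    """Hypothetical property-retention model p = a*exp(-t/tau(T)), with tau(T) Arrhenius-like."""
    t, T = X
    tau = tau_ref * np.exp((E / R) * (1.0 / T - 1.0 / T_REF))
    return a * np.exp(-t / tau)

# Synthetic aging data: times in days at three temperatures (K), with small noise.
t = np.tile([10.0, 30.0, 60.0, 120.0, 240.0], 3)
T = np.repeat([333.0, 353.0, 373.0], 5)
rng = np.random.default_rng(0)
y = aging_model((t, T), 1.0, 100.0, 5.0e4) + rng.normal(0.0, 0.01, t.size)

# One-step fit of all parameters with the Levenberg-Marquardt algorithm (no model splitting).
popt, pcov = curve_fit(aging_model, (t, T), y, p0=[0.9, 60.0, 3.0e4], method="lm")
print(popt)
print(np.sqrt(np.diag(pcov)))   # approximate standard errors
```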

  3. The parameter space of Cubic Galileon models for cosmic acceleration

    CERN Document Server

    Bellini, Emilio

    2013-01-01

    We use recent measurements of the expansion history of the universe to place constraints on the parameter space of cubic Galileon models. This gives strong constraints on the Lagrangian of these models. Most dynamical terms in the Galileon Lagrangian are constrained to be small and the acceleration is effectively provided by a constant term in the scalar potential, thus reducing, effectively, to an LCDM model for current acceleration. The effective equation of state is indistinguishable from that of a cosmological constant, w = -1, and the data constrain it to have no temporal variations of more than a few per cent. The energy density of the Galileon can contribute only about 10% of the acceleration energy density, with the other 90% coming from a cosmological constant term. This demonstrates how useful direct measurements of the expansion history of the universe are at constraining the dynamical nature of dark energy.

  4. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    Science.gov (United States)

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction of lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions.

  5. Analysis of case-parent trios for imprinting effect using a loglinear model with adjustment for sex-of-parent-specific transmission ratio distortion

    DEFF Research Database (Denmark)

    Huang, Lam Opal; Infante-Rivard, Claire; Labbe, Aurélie

    2017-01-01

    Transmission ratio distortion (TRD) is a phenomenon where parental transmission of the disease allele to the child does not follow the Mendelian inheritance ratio. TRD occurs in a sex-of-parent-specific or non-sex-of-parent-specific manner. An offset computed from the transmission probability of the minor allele in control-trios can be added to the loglinear model to adjust for TRD. Adjusting the model removes the inflation in the genotype relative risk (RR) estimate and Type 1 error introduced by non-sex-of-parent-specific TRD. We now propose to further extend this model to estimate an imprinting parameter. Some evidence suggests that more than 1% of all mammalian genes are imprinted. In the presence of imprinting, for example, the offspring inheriting an over-transmitted disease allele from the parent with a higher expression level in a neighboring gene is over-represented in the sample. TRD...

  6. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    Science.gov (United States)

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  7. Analysis of Model Parameters for a Polymer Filtration Simulator

    Directory of Open Access Journals (Sweden)

    N. Brackett-Rozinsky

    2011-01-01

    Full Text Available We examine a simulation model for polymer extrusion filters and determine its sensitivity to filter parameters. The simulator is a three-dimensional, time-dependent discretization of a coupled system of nonlinear partial differential equations used to model fluid flow and debris transport, along with statistical relationships that define debris distributions and retention probabilities. The flow of polymer fluid, and suspended debris particles, is tracked to determine how well a filter performs and how long it operates before clogging. A filter may have multiple layers, characterized by thickness, porosity, and average pore diameter. In this work, the thickness of each layer is fixed, while the porosities and pore diameters vary for a two-layer and three-layer study. The effects of porosity and average pore diameter on the measures of filter quality are calculated. For the three-layer model, these effects are tested for statistical significance using analysis of variance. Furthermore, the effects of each pair of interacting parameters are considered. This allows the detection of complexity, wherein changing two aspects of a filter together may generate results substantially different from what occurs when those same aspects change separately. The principal findings indicate that the first layer of a filter is the most important.

  8. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity, in order to gain robust real-time control on the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and integrate it, along with models of energy sources and electrical loads, in a complete framework which represents a real time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
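
    A common way to realize the hybrid stochastic/deterministic identification described above is a global evolutionary search followed by a local least-squares refinement. The sketch below does this for a hypothetical first-order Thevenin battery model under a constant discharge current; the model structure, parameter bounds and synthetic measurements are illustrative assumptions, not the equivalent circuit or procedure used by the authors.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def thevenin_voltage(t, ocv, r0, r1, c1, current=10.0):
    """Terminal voltage of a first-order Thevenin battery model under a constant discharge current."""
    return ocv - current * r0 - current * r1 * (1.0 - np.exp(-t / (r1 * c1)))

# Synthetic "measured" discharge transient with a little sensor noise.
t = np.linspace(0.0, 600.0, 121)
rng = np.random.default_rng(4)
v_meas = thevenin_voltage(t, 3.7, 0.05, 0.03, 2000.0) + rng.normal(0.0, 2e-3, t.size)

def residuals(p):
    return thevenin_voltage(t, *p) - v_meas

# Stage 1: stochastic global search over physically plausible bounds.
bounds = [(3.0, 4.2), (0.001, 0.2), (0.001, 0.2), (100.0, 10000.0)]
coarse = differential_evolution(lambda p: np.sum(residuals(p) ** 2), bounds, seed=1)

# Stage 2: deterministic local refinement starting from the stochastic solution.
fine = least_squares(residuals, coarse.x,
                     bounds=([3.0, 1e-3, 1e-3, 100.0], [4.2, 0.2, 0.2, 1e4]))
print(fine.x)   # should lie close to (3.7, 0.05, 0.03, 2000)
```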

  9. Applying Atmospheric Measurements to Constrain Parameters of Terrestrial Source Models

    Science.gov (United States)

    Hyer, E. J.; Kasischke, E. S.; Allen, D. J.

    2004-12-01

    Quantitative inversions of atmospheric measurements have been widely applied to constrain atmospheric budgets of a range of trace gases. Experiments of this type have revealed persistent discrepancies between 'bottom-up' and 'top-down' estimates of source magnitudes. The most common atmospheric inversion uses the absolute magnitude as the sole parameter for each source, and returns the optimal value of that parameter. In order for atmospheric measurements to be useful for improving 'bottom-up' models of terrestrial sources, information about other properties of the sources must be extracted. As the density and quality of atmospheric trace gas measurements improve, examination of higher-order properties of trace gas sources should become possible. Our model of boreal forest fire emissions is parameterized to permit flexible examination of the key uncertainties in this source. Using output from this model together with the UM CTM, we examined the sensitivity of CO concentration measurements made by the MOPITT instrument to various uncertainties in the boreal source: geographic distribution of burned area, fire type (crown fires vs. surface fires), and fuel consumption in above-ground and ground-layer fuels. Our results indicate that carefully designed inversion experiments have the potential to help constrain not only the absolute magnitudes of terrestrial sources, but also the key uncertainties associated with 'bottom-up' estimates of those sources.

  10. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
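
    For the simplest case named in the abstract, Brownian motion with drift, the drift and diffusion coefficient can be read off directly from the trajectory increments, whose mean and variance scale linearly with the time step. The sketch below illustrates this on a synthetic one-dimensional path; the parameter values are arbitrary, and the fractional-Brownian-motion case treated in the paper is not covered.

```python
import numpy as np

def estimate_drift_diffusion(x, dt):
    """Estimate drift v and diffusion D from a 1-D Brownian-with-drift trajectory.

    Increments of Brownian motion with drift have mean v*dt and variance 2*D*dt.
    """
    dx = np.diff(x)
    v = dx.mean() / dt
    D = dx.var(ddof=1) / (2.0 * dt)
    return v, D

# Synthetic single-cell path: drift 0.2 um/min, diffusion 0.5 um^2/min, dt = 1 min.
rng = np.random.default_rng(1)
dt, n = 1.0, 2000
steps = 0.2 * dt + np.sqrt(2.0 * 0.5 * dt) * rng.standard_normal(n)
x = np.concatenate([[0.0], np.cumsum(steps)])
print(estimate_drift_diffusion(x, dt))   # close to (0.2, 0.5)
```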

  11. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.

  12. Microbial Communities Model Parameter Calculation for TSPA/SR

    Energy Technology Data Exchange (ETDEWEB)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supercede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  13. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M and O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supercede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  14. Modelled basic parameters for semi-industrial irradiation plant design

    International Nuclear Information System (INIS)

    Mangussi, J.

    2009-01-01

    The basic parameters of an irradiation plant design are the total activity, the product uniformity ratio and the efficiency process. The target density, the minimum dose required and the throughput depend on the use to which the irradiator will be put. In this work, a model for calculating the specific dose rate at several depths in an infinite homogeneous medium produced by a slab source irradiator is presented. The product minimum dose rate for a set of target thicknesses is obtained. The design method steps are detailed and an illustrative example is presented. (author)

  15. Lumped-parameter fuel rod model for rapid thermal transients

    International Nuclear Information System (INIS)

    Perkins, K.R.; Ramshaw, J.D.

    1975-07-01

    The thermal behavior of fuel rods during simulated accident conditions is extremely sensitive to the heat transfer coefficient which is, in turn, very sensitive to the cladding surface temperature and the fluid conditions. The development of a semianalytical, lumped-parameter fuel rod model which is intended to provide accurate calculations, in a minimum amount of computer time, of the thermal response of fuel rods during a simulated loss-of-coolant accident is described. The results show good agreement with calculations from a comprehensive fuel-rod code (FRAP-T) currently in use at Aerojet Nuclear Company
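
    A lumped-parameter fuel rod model of this general type can be written as a small set of coupled ordinary differential equations for node-averaged temperatures. The sketch below integrates a two-node (fuel and cladding) energy balance with illustrative, non-design values for the capacities, conductances and linear heat rate; it is not the FRAP-T formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not design) values for a single lumped fuel node and cladding node, per unit length.
C_f, C_c = 300.0, 50.0      # heat capacities, J/K
H_fc, H_cw = 40.0, 100.0    # fuel-to-clad and clad-to-coolant conductances, W/K
T_cool = 560.0              # coolant temperature, K
q = 2.0e4                   # linear heat rate, W

def rod(t, y):
    """Two-node lumped energy balance: fuel and cladding temperatures."""
    Tf, Tc = y
    dTf = (q - H_fc * (Tf - Tc)) / C_f
    dTc = (H_fc * (Tf - Tc) - H_cw * (Tc - T_cool)) / C_c
    return [dTf, dTc]

sol = solve_ivp(rod, (0.0, 200.0), y0=[1200.0, 750.0], max_step=0.5)
print(sol.y[:, -1])   # approach to the steady state (about 1260 K fuel, 760 K cladding here)
```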

  16. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    Science.gov (United States)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  17. Bayesian analysis of inflation: Parameter estimation for single field models

    International Nuclear Information System (INIS)

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-01-01

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ n with n=2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the ''primordial dark ages'' between TeV and grand unified theory scale energies.

  18. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-08-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans model etc., the R.M. Solow model is part of the category which characterizes the economic growth.The paper aim is the economic growth measurement and add-on of the R.M. Solow adjusted model.

  19. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW/interplanetary magnetic field (IMF conditions (e.g. high SW speed, low cone angle the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through
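
    The multiple-linear-regression branch of such an empirical model can be sketched as an ordinary least-squares fit of a Pc3 activity index on solar wind speed, IMF cone angle and (log) proton density. The data below are synthetic placeholders standing in for the OMNI and MM100 measurements, and the regression coefficients are arbitrary.

```python
import numpy as np

# Hypothetical hourly inputs: SW speed (km/s), IMF cone angle (deg), proton density (cm^-3),
# and a ground-level Pc3 activity index generated from an assumed linear relation plus noise.
rng = np.random.default_rng(2)
n = 500
speed = rng.uniform(300.0, 700.0, n)
cone = rng.uniform(10.0, 80.0, n)
dens = rng.lognormal(1.5, 0.5, n)
pc3 = 0.01 * speed - 0.02 * cone + 0.3 * np.log(dens) + rng.normal(0.0, 0.5, n)

# Multiple linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), speed, cone, np.log(dens)])
coef, *_ = np.linalg.lstsq(X, pc3, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((pc3 - pred) ** 2) / np.sum((pc3 - pc3.mean()) ** 2)
print(coef, r2)
```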

  20. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    Full Text Available An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance and absorption of suspended matter without pigments and yellow substance in detritus using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data has been applied to sea waters of different types in the open ocean (case 1). Using the effective numerical single parameter classification with the water type optical index m as a parameter over the whole range of the open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  1. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on stream water quality. Five sites were selected along the stream: two upstream of the point where the leachate mixes with the surface water, one of the leachate itself, and two affected by the leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study, correlation coefficients r among the different water quality parameters of the various sites were calculated using the Pearson model, and the average of each correlation between two parameters was also calculated; the TDS-EC and pH-BOD pairs show significant positive correlations, while the temperature-TDS, temperature-EC, DO-TBL and DO-COD pairs show negative correlations. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differ significantly among the sites. By the application of these two statistical approaches, TDS and EC show a strongly positive correlation because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which is further confirmed by linear regression. (author)
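
    The two statistical steps described above, Pearson correlation between parameter pairs and single-factor ANOVA across sites, can be reproduced in a few lines with SciPy. The numbers below are hypothetical water-quality values used only to show the calls; they are not the Salhad data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements of TDS (mg/L) and EC (uS/cm) at one sampling site.
tds = np.array([210.0, 260.0, 305.0, 350.0, 400.0, 455.0])
ec = np.array([330.0, 410.0, 470.0, 545.0, 620.0, 700.0])
r, p = stats.pearsonr(tds, ec)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Single-factor ANOVA: does EC differ among three sites (alpha = 0.05)?
site1 = [330.0, 410.0, 470.0]
site2 = [545.0, 620.0, 700.0]
site3 = [800.0, 860.0, 915.0]
f, p_anova = stats.f_oneway(site1, site2, site3)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
```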

  2. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    Science.gov (United States)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and associated improvements in flood modeling. Satellite-retrieved precipitation has been considered as a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) on the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.

  3. Adjusting Felder-Silverman learning styles model for application in adaptive e-learning

    Directory of Open Access Journals (Sweden)

    Mihailović Đorđe

    2012-01-01

    Full Text Available This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to student characteristics and applying the concept of personalization in creating e-learning courses. The research was conducted at the Faculty of Organizational Sciences, University of Belgrade, during the winter semester of 2009/10, on a sample of 318 students. The students from the experimental group were divided into three clusters, based on data about their styles identified using an adjusted Felder-Silverman questionnaire. Data about learning styles collected during the research were used to determine typical groups of students and then to classify students into these groups. The classification was performed using data mining techniques. Adaptation of the e-learning courses was implemented according to the results of the data analysis. Evaluation showed that there was a statistically significant difference in the results of students who attended the course adapted using the described method, compared with the results of students who attended the course that was not adapted.

  4. Risk-adjusted performance evaluation in three academic thoracic surgery units using the Eurolung risk models.

    Science.gov (United States)

    Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro

    2018-01-03

    The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%, P < 0.001), whereas the observed morbidity of Centre 3 was higher than the predicted morbidity (observed 41.1% vs predicted 24.3%, P < 0.001). Centre 1 had higher observed mortality when compared with the predicted mortality (3.6% vs 2.1%, P = 0.005), whereas Centre 2 had an observed mortality rate significantly lower than the predicted mortality rate (1.2% vs 2.5%, P = 0.013). Centre 3 had an observed mortality rate in line with the predicted mortality rate (observed 1.4% vs predicted 2.4%, P = 0.17). The observed mortality rates in the patients with major complications were 30.8% in Centre 1 (versus predicted mortality rate 3.8%, P < 0.001), 8.2% in Centre 2 (versus predicted mortality rate 4.1%, P = 0.030) and 9.0% in Centre 3 (versus predicted mortality rate 3.5%, P = 0.014). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
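
    A minimal risk-adjusted audit of this kind compares a centre's observed complication count with the expected count obtained by summing its patients' Eurolung-predicted risks, for example via an observed-to-expected ratio and an exact binomial test. The sketch below uses simulated per-patient risks and an assumed observed count; it is an illustration only, not the statistical procedure reported in the paper.

```python
import numpy as np
from scipy.stats import binomtest

def audit_centre(observed_events, predicted_risks):
    """Compare an observed complication count with the risk-model expected count.

    predicted_risks: per-patient predicted probabilities from the risk model.
    Uses an exact binomial test against the mean predicted risk as a simple check.
    """
    n = len(predicted_risks)
    expected = float(np.sum(predicted_risks))
    test = binomtest(observed_events, n, expected / n)
    return expected, observed_events / expected, test.pvalue

# Hypothetical centre: 433 patients, 178 observed complications, mean predicted risk ~24%.
rng = np.random.default_rng(3)
risks = np.clip(rng.normal(0.243, 0.08, 433), 0.02, 0.8)
exp_events, oe_ratio, p = audit_centre(178, risks)
print(f"expected {exp_events:.1f}, O/E = {oe_ratio:.2f}, p = {p:.3g}")
```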

  5. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  6. Convergence of surface diffusion parameters with model crystal size

    Science.gov (United States)

    Cohen, Jennifer M.; Voter, Arthur F.

    1994-07-01

    A study of the variation in the calculated quantities for adatom diffusion with respect to the size of the model crystal is presented. The reported quantities include surface diffusion barrier heights, pre-exponential factors, and dynamical correction factors. Embedded atom method (EAM) potentials were used throughout this effort. Both the layer size and the depth of the crystal were found to influence the values of the Arrhenius factors significantly. In particular, exchange type mechanisms required a significantly larger model than standard hopping mechanisms to determine adatom diffusion barriers of equivalent accuracy. The dynamical events that govern the corrections to transition state theory (TST) did not appear to be as sensitive to crystal depth. Suitable criteria for the convergence of the diffusion parameters with regard to the rate properties are illustrated.
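
    The Arrhenius analysis implicit in reporting barrier heights and pre-exponential factors can be illustrated by a linear fit of ln(rate) against 1/T. The rates below are synthetic values generated from an assumed barrier and prefactor rather than EAM results.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical hop rates (s^-1) for one adatom diffusion mechanism at several temperatures.
T = np.array([300.0, 400.0, 500.0, 600.0, 700.0])
rate = 1.0e12 * np.exp(-0.45 / (KB * T))   # assumed prefactor 1e12 s^-1, barrier 0.45 eV

# Arrhenius analysis: ln(rate) = ln(nu0) - Ea / (kB * T) is linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * KB
nu0 = np.exp(intercept)
print(f"barrier = {Ea:.3f} eV, prefactor = {nu0:.2e} s^-1")
```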

  7. Diabatic models with transferrable parameters for generalized chemical reactions

    Science.gov (United States)

    Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.

    2017-05-01

    Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

  8. Piecewise Model and Parameter Obtainment of Governor Actuator in Turbine

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2015-01-01

    Full Text Available The governor actuators in some heat-engine plants have nonlinear valves. This nonlinearity of valves may lead to the inaccuracy of the opening and closing time constants calculated based on the whole segment fully open and fully close experimental test curves of the valve. An improved mathematical model of the turbine governor actuator is proposed to reflect the nonlinearity of the valve, in which the main and auxiliary piecewise opening and closing time constants instead of the fixed oil motive opening and closing time constants are adopted to describe the characteristics of the actuator. The main opening and closing time constants are obtained from the linear segments of the whole fully open and close curves. The parameters of proportional integral derivative (PID controller are identified based on the small disturbance experimental tests of the valve. Then the auxiliary opening and closing time constants and the piecewise opening and closing valve points are determined by the fully open/close experimental tests. Several testing functions are selected to compare genetic algorithm and particle swarm optimization algorithm (GA-PSO with other basic intelligence algorithms. The effectiveness of the piecewise linear model and its parameters are validated by practical power plant case studies.

  9. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  10. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
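
    A stripped-down version of the MCMC-Bayesian inversion described above is a random-walk Metropolis sampler over the hydrologic parameters, with a Gaussian likelihood for the runoff (or flux) observations. The toy forward model, flat box prior and noise level below are deliberate simplifications for illustration, not CLM4 or the study's actual setup.

```python
import numpy as np

def log_posterior(theta, obs, forward, sigma=1.0):
    """Gaussian log-likelihood plus a flat prior on a bounded box (a deliberate simplification)."""
    if np.any(theta < 0.0) or np.any(theta > 10.0):
        return -np.inf
    resid = obs - forward(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(obs, forward, theta0, n_iter=5000, step=0.2, seed=0):
    """Random-walk Metropolis sampler returning the full parameter chain."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    lp = log_posterior(theta, obs, forward)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(prop, obs, forward)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy "land model": runoff responds nonlinearly to two hydrologic parameters.
def toy_forward(theta):
    t = np.arange(30.0)
    return theta[0] * np.exp(-t / (1.0 + theta[1]))

rng = np.random.default_rng(42)
obs = toy_forward([3.0, 4.0]) + rng.normal(0.0, 0.2, 30)
chain = metropolis(obs, toy_forward, theta0=[1.0, 1.0])
print(chain[2500:].mean(axis=0))   # posterior means after burn-in, near the true values
```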

  11. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal levels.

  12. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    Science.gov (United States)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that captures variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  13. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  14. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-order polynomial fitted to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are also analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over the continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate the ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is slightly better than the GPS broadcast Klobuchar model.

  15. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

    We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w_0-w_a' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.

  16. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by comparing quartile values. ► Paired-sample T tests are utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from preceding days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified with the Kendall τ correlation test. Then, on the premise that forecasting the seasonal item and the trend item separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for the seasonal and trend items respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
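    The separate treatment of seasonal and trend components described above can be sketched in a few lines. The weekly multiplicative seasonal indices and the simple linear regression used for the trend below are illustrative assumptions, not the authors' SEAM implementation or their eleven regression models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily load series with a weekly seasonal pattern (period = 7).
t = np.arange(98)
load = 100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)
period = 7

# Seasonal item: average multiplicative index for each day of the week.
overall_mean = load.mean()
seasonal_index = np.array([load[t % period == k].mean() / overall_mean
                           for k in range(period)])

# Trend item: deseasonalize the series, then fit a simple linear regression.
deseasonalized = load / seasonal_index[t % period]
slope, intercept = np.polyfit(t, deseasonalized, 1)

# 1-week-ahead forecast: regression trend re-scaled by the seasonal index.
t_future = np.arange(t[-1] + 1, t[-1] + 8)
forecast = (intercept + slope * t_future) * seasonal_index[t_future % period]
print(np.round(forecast, 1))
```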

  17. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a self-tuning multivariable controller. This article describes the technique employed and the results obtained for the characterization of the model structure and the parameter estimation. Curves showing the speed of convergence toward the numerical values of the parameters are presented.
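    A recursive least squares update of the kind described can be sketched as follows. The single-input, single-output ARX structure and the noise level are illustrative simplifications of the multivariable autoregressive moving-average case treated in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# True first-order ARX system: y[k] = a*y[k-1] + b*u[k-1] + noise.
a_true, b_true = 0.8, 0.5
n = 200
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.normal()

# Recursive least squares estimation of theta = [a, b].
theta = np.zeros(2)          # parameter estimates
P = np.eye(2) * 1000.0       # covariance of the estimates
for k in range(1, n):
    phi = np.array([y[k - 1], u[k - 1]])     # regressor vector
    gain = P @ phi / (1.0 + phi @ P @ phi)   # update gain
    theta = theta + gain * (y[k] - phi @ theta)
    P = P - np.outer(gain, phi @ P)

print("estimated a, b:", np.round(theta, 3))  # converges toward 0.8 and 0.5
```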

  18. Application of multi-parameter chorus and plasmaspheric hiss wave models in radiation belt modeling

    Science.gov (United States)

    Aryan, H.; Kang, S. B.; Balikhin, M. A.; Fok, M. C. H.; Agapitov, O. V.; Komar, C. M.; Kanekal, S. G.; Nagai, T.; Sibeck, D. G.

    2017-12-01

    Numerical simulation studies of the Earth's radiation belts are important to understand the acceleration and loss of energetic electrons. The Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model along with many other radiation belt models require inputs for pitch angle, energy, and cross diffusion of electrons due to chorus and plasmaspheric hiss waves. These parameters are calculated using statistical wave distribution models of chorus and plasmaspheric hiss amplitudes. In this study we incorporate recently developed multi-parameter chorus and plasmaspheric hiss wave models based on geomagnetic index and solar wind parameters. We perform CIMI simulations for two geomagnetic storms and compare the flux enhancement of MeV electrons with data from the Van Allen Probes and Akebono satellites. We show that the relativistic electron fluxes calculated with multi-parameter wave models resemble the observations more accurately than the relativistic electron fluxes calculated with single-parameter wave models. This indicates that wave models based on a combination of geomagnetic index and solar wind parameters are more effective as inputs to radiation belt models.

  19. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    Science.gov (United States)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum, with a particular emphasis on how crop phenology and agricultural management practices influence the turbulent fluxes exchanged with the atmosphere and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main sources of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) are determined through a screening of the main parameters of the model on a multi-site basis, leading to the selection of a subset of the most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root

  20. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m of eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  1. The electronic disability record: purpose, parameters, and model use case.

    Science.gov (United States)

    Tulu, Bengisu; Horan, Thomas A

    2009-01-01

    The active engagement of consumers is an important factor in achieving widespread success of health information systems. The disability community represents a major segment of the healthcare arena, with more than 50 million Americans experiencing some form of disability. In keeping with the "consumer-driven" approach to e-health systems, this paper considers the distinctive aspects of electronic and personal health record use by this segment of society. Drawing upon the information shared during two national policy forums on this topic, the authors present the concept of Electronic Disability Records (EDR). The authors outline the purpose and parameters of such records, with specific attention to their ability to organize health and financial data in a manner that can be used to expedite the disability determination process. In doing so, the authors discuss their interaction with Electronic Health Records (EHR) and Personal Health Records (PHR). The authors then draw upon these general parameters to outline a model use case for disability determination and discuss related implications for disability health management. The paper further reports on the subsequent considerations of these and related deliberations by the American Health Information Community (AHIC).

  2. Set up of a method for the adjustment of resonance parameters on integral experiments; Mise au point d'une methode d'ajustement des parametres de resonance sur des experiences integrales

    Energy Technology Data Exchange (ETDEWEB)

    Blaise, P.

    1996-12-18

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions. The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.).

  3. The S-parameter in Holographic Technicolor Models

    CERN Document Server

    Agashe, Kaustubh; Grojean, Christophe; Reece, Matthew

    2007-01-01

    We study the S parameter, considering especially its sign, in models of electroweak symmetry breaking (EWSB) in extra dimensions, with fermions localized near the UV brane. Such models are conjectured to be dual to 4D strong dynamics triggering EWSB. The motivation for such a study is that a negative value of S can significantly ameliorate the constraints from electroweak precision data on these models, allowing lower mass scales (TeV or below) for the new particles and leading to easier discovery at the LHC. We first extend an earlier proof of S>0 for EWSB by boundary conditions in arbitrary metric to the case of general kinetic functions for the gauge fields or arbitrary kinetic mixing. We then consider EWSB in the bulk by a Higgs VEV showing that S is positive for arbitrary metric and Higgs profile, assuming that the effects from higher-dimensional operators in the 5D theory are sub-leading and can therefore be neglected. For the specific case of AdS_5 with a power law Higgs profile, we also show that S ~ ...

  4. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    Science.gov (United States)

    Zhao, Song-Feng; Huang, Fang; Wang, Guo-Li; Zhou, Xiao-Xin

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so-called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using density functional theory. We examine the alignment-dependent tunneling ionization probabilities from the MO-ADK model for several molecules by comparing with molecular strong-field approximation (MO-SFA) calculations. We show that the molecular Perelomov–Popov–Terent'ev (MO-PPT) model can successfully give the laser wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbitals with antibonding symmetry (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by the National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institutions of Higher Learning of Gansu Province

  5. Sound propagation and absorption in foam - A distributed parameter model.

    Science.gov (United States)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  7. Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters

    Directory of Open Access Journals (Sweden)

    Adeniyi Ganiyu Adeogun

    2015-10-01

    Full Text Available This paper presents the outcome of our investigation into the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed through coupling of an existing 1D sewer network model (SWMM) and a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing a sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as mesh resolution, Digital Elevation Model (DEM) resolution and roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning friction value, the DEM resolution and the area of the triangular mesh. Also, changes in the aforementioned modeling parameters influence the flood characteristics, such as the inundation extent, the flow depth and the velocity across the model domain.

  8. Modeling the outflow of liquid with initial supercritical parameters using the relaxation model for condensation

    Directory of Open Access Journals (Sweden)

    Lezhnin Sergey

    2017-01-01

    Full Text Available A two-temperature model of the outflow from a vessel with initially supercritical parameters of the medium has been implemented. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.

  9. Experimental parameters differentially affect the humoral response of the cholera-toxin-based murine model of food allergy

    DEFF Research Database (Denmark)

    Kroghsbo, S.; Christensen, Hanne Risager; Frøkiær, Hanne

    2003-01-01

    Background: Recent studies have developed a murine model of IgE-mediated food allergy based on oral coadministration of antigen and cholera toxin (CT) to establish a maximal response for studying immunopathogenic mechanisms and immunotherapeutic strategies. However, for studying subtle...... interested in characterizing the individual effects of the parameters in the CT-based model: CT dose, antigen type and dose, and number of immunizations. Methods: BALB/c mice were orally sensitized weekly for 3 or 7 weeks with graded doses of CT and various food antigens (soy-trypsin inhibitor, ovalbumin...... of the antibody response depended on the type of antigen and number of immunizations. Conclusions: The critical parameters of the CT-based murine allergy model differentially control the intensity and kinetics of the developing immune response. Adjustment of these parameters could be a key tool for tailoring...

  10. [Construction and validation of a multidimensional model of students' adjustment to college context].

    Science.gov (United States)

    Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S

    2006-05-01

    This article presents a model of the interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year college students of the University of Minho. The path analysis results suggest that students' initial expectations of involvement in academic life were an effective predictor of their involvement during the first year, and that the social climate of the classroom influenced their involvement, well-being and levels of satisfaction. However, these relationships were not strong enough to influence the criterion variables integrated in the model (academic performance and psychosocial development). Academic performance was predicted by high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development shown at the time students entered college. Though more research is needed, these results point to the importance of students' pre-college characteristics when considering the quality of their college adjustment process.

  11. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity.

    Science.gov (United States)

    King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M

    2015-11-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.

  12. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    Science.gov (United States)

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model for the river Inn, and the hydrological models HQsim (a rainfall-runoff-discharge model) and the snow and ice melt model SES for modeling the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the total of 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters were identified as sensitive for calibration when aiming at a well-calibrated model for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.

  13. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    Science.gov (United States)

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  14. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.

  15. Testing for parameter instability across different modeling frameworks

    NARCIS (Netherlands)

    Calvori, Francesco; Creal, Drew; Koopman, Siem Jan; Lucas, André

    2017-01-01

    We develop a new parameter instability test that generalizes the seminal ARCH Lagrange Multiplier test of Engle (1982) for a constant variance against the alternative of autoregressive conditional heteroskedasticity to settings with nonlinear time-varying parameters and non-Gaussian distributions. We

  16. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    Science.gov (United States)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network are acknowledged as a dynamic game process between them. Existing Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors; PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most of these models have limitations, e.g., the flow over-adjustment problem for the discrete PSAP model, the absolute-cost-difference route adjustment problem, etc. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to routes with lower cost at a rate that depends unilaterally on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting the stationary or non-stationary state of the link flow pattern. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
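    The relative-proportion switching idea can be illustrated with a small sketch. The exact functional form below (flow shifted from a costlier route to each cheaper alternative in proportion to the relative cost difference, scaled by a step size alpha) and the two-route network are assumptions made for illustration, not the authors' precise specification.

```python
import numpy as np

def reprap_step(flows, costs, alpha=0.1):
    """One day-to-day update: flow moves from costlier routes to cheaper
    alternatives in proportion to the relative cost difference."""
    new_flows = flows.astype(float).copy()
    for i in range(len(flows)):
        for j in range(len(flows)):
            if costs[i] > costs[j]:
                rel_diff = (costs[i] - costs[j]) / costs[i]   # relative difference
                shift = alpha * flows[i] * rel_diff
                new_flows[i] -= shift
                new_flows[j] += shift
    return new_flows

# Two parallel routes with linear cost functions and a fixed total demand of 10.
flows = np.array([8.0, 2.0])
for day in range(200):
    costs = np.array([1.0 + 0.5 * flows[0], 2.0 + 0.2 * flows[1]])
    flows = reprap_step(flows, costs)

print(np.round(flows, 2))  # approaches the equal-cost (user equilibrium) split
```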

  17. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Science.gov (United States)

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative the parenting they perceived; and the poorer the perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.

  18. Kriging modeling and SPSA adjusting PID with KPWF compensator control of IPMC gripper for mm-sized objects

    Science.gov (United States)

    Chen, Yang; Hao, Lina; Yang, Hui; Gao, Jinhai

    2017-12-01

    Ionic polymer metal composite (IPMC), a new smart material, has attracted wide attention in the micromanipulation field. In this paper, a novel two-finger gripper which contains an IPMC actuator and an ultrasensitive force sensor is proposed and fabricated. The IPMC, as one finger of the gripper for mm-sized objects, can achieve gripping and releasing motion, and the other finger works not only as a support finger but also as a force sensor. With the feedback signal of the force sensor, this integrated actuating and sensing gripper can grip miniature objects at the millimeter scale. The Kriging model is used to describe the nonlinear characteristics of the IPMC for the first time, and then a control scheme, in which simultaneous perturbation stochastic approximation adjusts the parameters of a proportional-integral-derivative controller with a Kriging predictor wavelet filter compensator, is applied to track the gripping force of the gripper. High-precision force tracking in the foam ball manipulation process is obtained on a semi-physical experimental platform, which demonstrates that this gripper for mm-sized objects can work well in manipulation applications.

  19. Statistical osteoporosis models using composite finite elements: a parameter study.

    Science.gov (United States)

    Wolfram, Uwe; Schwen, Lars Ole; Simon, Ulrich; Rumpf, Martin; Wilke, Hans-Joachim

    2009-09-18

    Osteoporosis is a widely spread disease with severe consequences for patients and high costs for health care systems. The disease is characterised by a loss of bone mass which induces a loss of mechanical performance and structural integrity. It was found that transverse trabeculae are thinned and perforated while vertical trabeculae stay intact. For understanding these phenomena and the mechanisms leading to fractures of trabecular bone due to osteoporosis, numerous researchers employ micro-finite element models. To avoid disadvantages in setting up classical finite element models, composite finite elements (CFE) can be used. The aim of the study is to test the potential of CFE. For that, a parameter study on numerical lattice samples with statistically simulated, simplified osteoporosis is performed. These samples are subjected to compression and shear loading. Results show that the biggest drop of compressive stiffness is reached for transverse isotropic structures losing 32% of the trabeculae (minus 89.8% stiffness). The biggest drop in shear stiffness is found for an isotropic structure also losing 32% of the trabeculae (minus 67.3% stiffness). The study indicates that losing trabeculae leads to a worse drop of macroscopic stiffness than thinning of trabeculae. The results further demonstrate the advantages of CFEs for simulating micro-structured samples.

  20. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    Science.gov (United States)

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of
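    The recalibration idea, adjusting the transmission parameter of a homogeneous-mixing model so that its prevalence curve matches a reference curve, can be sketched with a discrete-time SIR model. The reference curve, population size, recovery rate, and grid search below are illustrative assumptions standing in for the contact-network simulations used in the study.

```python
import numpy as np

def sir_prevalence(beta, gamma=0.2, n=1000, i0=5, steps=200):
    """Discrete-time homogeneous-mixing SIR; returns the infected (prevalence) curve."""
    s, i, r = n - i0, i0, 0
    curve = []
    for _ in range(steps):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        curve.append(i)
    return np.array(curve)

# Hypothetical "reference" prevalence curve standing in for a contact-network simulation.
reference = sir_prevalence(beta=0.55)

# Recalibrate: grid search for the beta that best reproduces the reference curve.
betas = np.linspace(0.2, 0.9, 141)
errors = [np.sum((sir_prevalence(b) - reference) ** 2) for b in betas]
best_beta = betas[int(np.argmin(errors))]

ref_peak, fit_peak = reference.argmax(), sir_prevalence(best_beta).argmax()
print(f"recalibrated beta = {best_beta:.3f}, peak-time difference = {abs(ref_peak - fit_peak)} steps")
```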

  1. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall over a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
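    The adjusting step, modifying finer-scale synthetic depths so that they aggregate exactly to the observed coarser-scale total, can be illustrated with the simple proportional procedure below. This is only one of several adjusting procedures discussed in the literature, and the synthetic hourly depths are placeholders rather than output of a fitted Bartlett-Lewis model.

```python
import numpy as np

def proportional_adjust(fine_depths, coarse_total):
    """Scale synthetic fine-scale depths so they sum exactly to the observed
    coarse-scale total while preserving their relative proportions."""
    synthetic_total = fine_depths.sum()
    if synthetic_total == 0:
        return fine_depths
    return fine_depths * (coarse_total / synthetic_total)

# Example: synthetic hourly depths for one day, adjusted to the observed daily total.
rng = np.random.default_rng(3)
synthetic_hourly = rng.gamma(shape=0.4, scale=1.0, size=24)   # placeholder values
observed_daily = 18.6                                          # mm

adjusted = proportional_adjust(synthetic_hourly, observed_daily)
print(round(adjusted.sum(), 3))   # 18.6 — consistent with the daily observation
```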

  2. Effect of Process Parameter in Laser Cutting of PMMA Sheet and ANFIS Modelling for Online Control

    Directory of Open Access Journals (Sweden)

    Hossain Anamul

    2016-01-01

    Full Text Available Laser beam machining (LBM) is a promising, high-accuracy machining technology in advanced manufacturing processes. In LBM, crucial machining qualities of the end product include the heat-affected zone, surface roughness, kerf width, thermal stress, taper angle, etc. For industrial applications, especially in laser cutting of thermoplastics, it is essential to obtain an output product with minimum kerf width. The kerf width depends on laser input parameters such as laser power, cutting speed, standoff distance and assist gas pressure. However, it is difficult to derive a functional relationship due to the high uncertainty among these parameters. Hence, a total of 81 sets of full-factorial experiments were conducted, representing four input parameters with three different levels. The experiments were performed with a continuous wave (CW) CO2 laser (a Zech laser machine) with a TEM01 mode structure that can provide laser power up to 500 W. A polymethylmethacrylate (PMMA) sheet with a thickness of 3.0 mm was used for this experiment. Laser power, cutting speed, standoff distance and assist gas pressure were used as input parameters for the output, kerf width. Standoff distance, laser power, cutting speed and assist gas pressure have the dominant effects on kerf width, in that order, although the assist gas also plays a significant role in removing harmful gases. An ANFIS model has been developed for online control purposes. This research is intended to help manufacturing engineers adjust and make decisions on process parameters in the laser manufacturing industry for PMMA thermoplastics, with the desired minimum kerf width as well as intricate shape design purposes.

  3. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available
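    Fitting a continuous distribution to data by maximum likelihood, one of the approaches listed above, takes only a few lines with standard tools. The lognormal choice, the synthetic sample, and the Kolmogorov-Smirnov check below are illustrative assumptions, not the distributions or data used in the Yucca Mountain study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic data standing in for a measured performance-assessment parameter.
sample = rng.lognormal(mean=1.0, sigma=0.5, size=200)

# Maximum likelihood fit of a lognormal distribution (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(sample, floc=0)
print(f"fitted sigma = {shape:.3f}, median = {scale:.3f}")

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
ks_stat, p_value = stats.kstest(sample, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```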

  4. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  5. MATHEMATICAL MODELING OF FLOW PARAMETERS FOR SINGLE WIND TURBINE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available It is known that the construction of several large wind farms is planned on the territory of the Russian Federation. Tasks connected with the design and efficiency evaluation of wind farms are therefore in demand today. One possible direction in design is connected with mathematical modeling. The large eddy simulation method, developed within computational hydrodynamics, allows the unsteady structure of the flow to be reproduced in detail and various integrated values to be determined. In this work, the operation of a single wind turbine installation is calculated by means of large eddy simulation and the Actuator Line Method along the turbine blade. For the problem definition, a computational domain in the form of a box was considered and an adapted unstructured grid was used. The mathematical model included the continuity and momentum equations for incompressible fluid. The large-scale vortex structures were calculated by integrating the filtered equations. The calculation was carried out with the Smagorinsky model to determine the subgrid-scale turbulent viscosity. The geometrical parameters of the wind turbine were set based on open sources on the Internet. All physical values were defined at the center of each computational cell. The approximation of the terms in the equations was executed with second-order accuracy in time and space. The equations coupling velocity and pressure were solved by means of the iterative algorithm PIMPLE. The total number of calculated physical values at each time step was 18, so the resources of a high-performance cluster were required. As a result of the flow calculation in the wake of the three-bladed turbine, average and instantaneous values of velocity, pressure, subgrid kinetic energy and turbulent viscosity, and components of the subgrid stress tensor were obtained. The results matched known results of experiments and numerical simulation, testifying to the opportunity

  6. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    NARCIS (Netherlands)

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over

  7. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity- and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  8. [Structural adjustment, cultural adjustment?].

    Science.gov (United States)

    Dujardin, B; Dujardin, M; Hermans, I

    2003-12-01

    Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques that have been made. The cultural consequences of SAPs are introduced and described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.

  9. Age-related change in Wechsler IQ norms after adjustment for the Flynn effect: estimates from three computational models.

    Science.gov (United States)

    Agbayani, Kristina A; Hiscock, Merrill

    2013-01-01

    A previous study found that the Flynn effect accounts for 85% of the normative difference between 20- and 70-year-olds on subtests of the Wechsler intelligence tests. Adjusting scores for the Flynn effect substantially reduces normative age-group differences, but the appropriate amount of adjustment is uncertain. The present study replicates previous findings and employs two other methods of adjusting for the Flynn effect. Averaged across models, results indicate that the Flynn effect accounts for 76% of normative age-group differences on Wechsler IQ subtests. Flynn-effect adjustment reduces the normative age-related decline in IQ from 4.3 to 1.1 IQ points per decade.

  10. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  11. Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy

    Science.gov (United States)

    Kniffin, Gabriel Paul

    Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.

  12. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    Science.gov (United States)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in the last decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks to form a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken into consideration. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show a reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid response to changes in ground conductivity and resistances shows similar results to a lesser extent. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
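
    The abstract states that GIC depend on the geomagnetic field variation, ground conductivity and grid resistances but does not give the model equations. The sketch below illustrates one common simplification (not necessarily the paper's method): the plane-wave method over a uniform half-space converts the magnetic field variation into a horizontal geoelectric field, which is then mapped to a substation GIC by two network coefficients a and b. The conductivity, coefficients and input series are invented for illustration.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def geoelectric_field(b_field, dt, sigma):
    """Plane-wave geoelectric field [V/m] over a uniform half-space of
    conductivity sigma [S/m], from one horizontal magnetic component [T]."""
    n = len(b_field)
    B = np.fft.rfft(b_field)
    omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
    Z = np.sqrt(1j * omega * MU0 / sigma)     # surface impedance
    E = Z * B / MU0                           # E = Z * H, with H = B / mu0
    return np.fft.irfft(E, n)

# Hypothetical 1-minute magnetic field variations [T] during a storm.
rng = np.random.default_rng(1)
dt = 60.0
bx = 1e-9 * np.cumsum(rng.normal(0, 20, 2048))   # Bx [T]
by = 1e-9 * np.cumsum(rng.normal(0, 20, 2048))   # By [T]

ex = geoelectric_field(by, dt, sigma=1e-3)    # Ex driven by By
ey = -geoelectric_field(bx, dt, sigma=1e-3)   # Ey driven by Bx

# GIC at one substation: GIC = a*Ex + b*Ey, with a, b [A km/V] set by the
# grid topology and resistances (values below are purely illustrative).
a, b = 80.0, -25.0
gic = a * ex * 1e3 + b * ey * 1e3             # E converted from V/m to V/km
print(f"peak |GIC| ~ {np.max(np.abs(gic)):.1f} A")
```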

  13. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, and then to derive a least squares parameter identification algorithm. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
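
    The abstract's idea (eliminate the state, regress on inputs and outputs, then recover the parameters by least squares) can be illustrated with a second-order system, whose input-output form is y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-1) + b2 u(k-2). This is only a simplified sketch of that step, with invented parameter values, not the authors' full algorithm or convergence analysis.

```python
import numpy as np

# Simulate a 2nd-order canonical state-space system; eliminating the state
# gives the input-output form y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2).
rng = np.random.default_rng(2)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5          # "true" parameters (hypothetical)
n = 1000
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] \
           + 0.01 * rng.normal()               # small measurement noise

# Least-squares parameter identification from input-output data only.
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))
```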

  14. Parameter and state estimator for state space models.

    Science.gov (United States)

    Ding, Ruifeng; Zhuang, Linfan

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, and then to derive a least squares parameter identification algorithm. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.

  15. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Directory of Open Access Journals (Sweden)

    Henryk Ratajkiewicz

    2016-08-01

    Full Text Available This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and the spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residue. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of the fungicides. The results showed that higher fungicidal residues were connected with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application for the processing tomato crop.

  16. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Energy Technology Data Exchange (ETDEWEB)

    Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.

    2016-11-01

    This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and the spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residue. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of the fungicides. The results showed that higher fungicidal residues were connected with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application for the processing tomato crop. (Author)

  17. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    National Research Council Canada - National Science Library

    Lee, John

    2000-01-01

    ...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...

  18. Using Multilevel Modeling to Assess Case-Mix Adjusters in Consumer Experience Surveys in Health Care

    NARCIS (Netherlands)

    Damman, Olga C.; Stubbe, Janine H.; Hendriks, Michelle; Arah, Onyebuchi A.; Spreeuwenberg, Peter; Delnoij, Diana M. J.; Groenewegen, Peter P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  19. A new model to estimate insulin resistance via clinical parameters in adults with type 1 diabetes.

    Science.gov (United States)

    Zheng, Xueying; Huang, Bin; Luo, Sihui; Yang, Daizhi; Bao, Wei; Li, Jin; Yao, Bin; Weng, Jianping; Yan, Jinhua

    2017-05-01

    Insulin resistance (IR) is a risk factor for the development of micro- and macro-vascular complications in type 1 diabetes (T1D). However, diabetes management in adults with T1D is limited by the lack of simple and reliable methods to estimate insulin resistance. The aim of this study was to develop a new model to estimate IR via clinical parameters in adults with T1D. A total of 36 adults with adulthood-onset T1D (n = 20) or childhood-onset T1D (n = 16) were recruited by quota sampling. After an overnight insulin infusion to stabilize the blood glucose at 5.6 to 7.8 mmol/L, they underwent a 180-minute euglycemic-hyperinsulinemic clamp. Glucose disposal rate (GDR, mg kg⁻¹ min⁻¹) was calculated from data collected during the last 30 minutes of the test. Demographic factors (age, sex, and diabetes duration) and metabolic parameters (blood pressure, glycated hemoglobin A1c [HbA1c], waist-to-hip ratio [WHR], and lipids) were collected to evaluate insulin resistance. Then, age at diabetes onset and clinical parameters were used to develop a model to estimate lnGDR by stepwise linear regression. From the stepwise process, a best model to estimate insulin resistance was generated, including HbA1c, diastolic blood pressure, and WHR. Age at diabetes onset did not enter any of the models. We propose the following new model to estimate IR as lnGDR for adults with T1D: lnGDR = 4.964 - 0.121 × HbA1c (%) - 0.012 × diastolic blood pressure (mmHg) - 1.409 × WHR (adjusted R² = 0.616, P < …). Insulin resistance in adults living with T1D can be estimated using routinely collected clinical parameters. This simple model provides a potential tool for estimating IR in large-scale epidemiological studies of adults with T1D regardless of age at onset. Copyright © 2016 John Wiley & Sons, Ltd.
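
    The reported regression can be applied directly to routinely collected values. The sketch below simply evaluates the published equation; the example patient values are invented.

```python
import math

def estimated_gdr(hba1c_percent, dbp_mmhg, whr):
    """Estimated glucose disposal rate (mg kg^-1 min^-1) from the model
    lnGDR = 4.964 - 0.121*HbA1c - 0.012*DBP - 1.409*WHR reported in the abstract."""
    ln_gdr = 4.964 - 0.121 * hba1c_percent - 0.012 * dbp_mmhg - 1.409 * whr
    return math.exp(ln_gdr)

# Illustrative (invented) patient values: HbA1c 7.5%, DBP 75 mmHg, WHR 0.85.
print(f"estimated GDR = {estimated_gdr(7.5, 75, 0.85):.2f} mg/kg/min")
```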

  20. THREE-PARAMETER CREEP DAMAGE CONSTITUTIVE MODEL AND ITS APPLICATION IN HYDRAULIC TUNNELLING

    OpenAIRE

    Luo Gang; Chen Liang

    2016-01-01

    Rock deformation is a time-dependent process, generally referred to as rheology. Especially for soft rock strata, the design and construction of tunnels should take full account of the rheological properties of the adjoining rocks. Based on the classic three-parameter H-K model (generalized Kelvin model), this paper proposes a three-parameter H-K damage model whose parameters attenuate with increasing equivalent strain, and provides the attenuation equation of the model parameters in the first, second and third stage o...

  1. Predictive models for estimating visceral fat: The contribution from anthropometric parameters.

    Science.gov (United States)

    Pinho, Claudia Porto Sabino; Diniz, Alcides da Silva; de Arruda, Ilma Kruze Grande; Leite, Ana Paula Dornelas Leão; Petribú, Marina de Moraes Vasconcelos; Rodrigues, Isa Galvão

    2017-01-01

    Excessive adipose visceral tissue (AVT) represents an independent risk factor for cardiometabolic alterations. The search continues for a highly valid marker for estimating visceral adiposity that is a simple and low-cost tool able to screen individuals who are highly at risk of being viscerally obese. The aim of this study was to develop a predictive model for estimating AVT volume using anthropometric parameters. A cross-sectional study involving overweight individuals whose AVT was evaluated (using computed tomography, CT), along with the following anthropometric parameters: body mass index (BMI), abdominal circumference (AC), waist-to-hip ratio (WHpR), waist-to-height ratio (WHtR), sagittal diameter (SD), conicity index (CI), neck circumference (NC), neck-to-thigh ratio (NTR), waist-to-thigh ratio (WTR), and body adiposity index (BAI). 109 individuals with an average age of 50.3±12.2 years were evaluated. The predictive equation developed to estimate AVT in men was AVT = -1647.75 + 2.43(AC) + 594.74(WHpR) + 883.40(CI) (adjusted R²: 64.1%). For women, the model chosen was: AVT = -634.73 + 1.49(Age) + 8.34(SD) + 291.51(CI) + 6.92(NC) (adjusted R²: 40.4%). The predictive ability of the equations developed in relation to AVT volume determined by CT was 66.9% and 46.2% for males and females, respectively (p<0.001). A quick and precise AVT estimate, especially for men, can be obtained using only AC, WHpR, and CI for men, and age, SD, CI, and NC for women. These equations can be used as a clinical and epidemiological tool for overweight individuals.
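
    The two published equations can be evaluated directly; the sketch below does so with invented anthropometric values (AC, SD and NC in cm, age in years) and reports the AVT estimate in the units of the original CT-based study.

```python
def avt_men(ac_cm, whpr, ci):
    """Visceral adipose tissue estimate for men (equation from the abstract)."""
    return -1647.75 + 2.43 * ac_cm + 594.74 * whpr + 883.40 * ci

def avt_women(age_years, sd_cm, ci, nc_cm):
    """Visceral adipose tissue estimate for women (equation from the abstract)."""
    return -634.73 + 1.49 * age_years + 8.34 * sd_cm + 291.51 * ci + 6.92 * nc_cm

# Illustrative (invented) anthropometric values.
print(f"AVT (man)   ~ {avt_men(ac_cm=102, whpr=0.98, ci=1.30):.0f}")
print(f"AVT (woman) ~ {avt_women(age_years=52, sd_cm=24, ci=1.25, nc_cm=36):.0f}")
```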

  2. CONSTRUCTION OF MODELS OF ANTHROPOMETRIC AND DERMATOGLYPHIC PARAMETERS OF THE SUBJECT

    Directory of Open Access Journals (Sweden)

    Novikova A.O.

    2017-12-01

    Full Text Available The scientific work is devoted to the analysis of constitutional and morpho-functional parameters of a person. The relevance of the chosen topic is substantiated, and the problem of determining the functional state of a person, in particular the level of health, is analyzed. A correlation analysis of the anthropometric parameters and dermatoglyphic signs of a person is carried out.

  3. Behavioural Pattern of Invertibility Parameter of Arima Model ...

    African Journals Online (AJOL)

    It was deduced that the behaviour of the invertibility parameter πi depends on the order of the autoregressive part (p), the order of the integrated part (d), and the positive and negative values of the moving average parameter (ϑ). Journal of the Nigerian Association of Mathematical Physics, Volume 19 (November, 2011), pp 591 – 606 ...

  4. A Note on the Item Information Function of the Four-Parameter Logistic Model

    Science.gov (United States)

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
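
    As context for the note, the 4PL response function and its item information can be written down explicitly. The sketch below uses the generic definition I(θ) = P'(θ)² / (P(1-P)) rather than reproducing Lord's derivation, and the item parameters are illustrative.

```python
import numpy as np

def p_4pl(theta, a, b, c, d):
    """4PL response probability: lower asymptote c, upper asymptote d <= 1."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def info_4pl(theta, a, b, c, d):
    """Item information I(theta) = P'(theta)^2 / (P * (1 - P))."""
    p = p_4pl(theta, a, b, c, d)
    dp = a * (p - c) * (d - p) / (d - c)      # derivative of the 4PL curve
    return dp**2 / (p * (1.0 - p))

theta = np.linspace(-4, 4, 161)
info = info_4pl(theta, a=1.5, b=0.0, c=0.2, d=0.95)   # illustrative item parameters
print(f"information peaks near theta = {theta[np.argmax(info)]:.2f}")
```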

  5. Adjusting weather radar data to rain gauge measurements with data-driven models

    Science.gov (United States)

    Teschl, Reinhard; Randeu, Walter; Teschl, Franz

    2010-05-01

    Weather radar networks provide data with good spatial coverage and temporal resolution. Hence they are able to describe the variability of precipitation. Typical radar stations determine the rain rate for every square kilometre and make a full volume scan within about 5 minutes. A weakness, however, is their often poor metering precision, which limits the applicability of radar for hydrological purposes. In contrast to rain gauges, which measure precipitation directly on the ground, the radar determines the reflectivity aloft and remotely. Due to this principle, several sources of possible errors occur. Therefore, improving radar estimates of rainfall is still a vital topic in radar meteorology and hydrology. This paper presents data-driven approaches to improve radar estimates of rainfall by mapping radar reflectivity measurements Z to rain gauge data R. The analysis encompasses several input configurations and data-driven models. Reflectivity measurements at a constant altitude and the vertical profiles of reflectivity above a rain gauge are used as input parameters. The applied models are an Artificial Neural Network (ANN), a Model Tree (MT), and IBk, a k-nearest-neighbour classifier. The relationship found between the data of a rain gauge and the reflectivity measurements is subsequently applied to another site with comparable terrain. Based on this independent dataset, the performance of the data-driven models in the various input configurations is evaluated. For this study, rain gauge and radar data from the province of Styria, Austria, were available. The data sets extend over a two-year period (2001 and 2002). The available rain gauges use the tipping bucket principle with a resolution of 0.1 mm. Reflectivity measurements are obtained from the Doppler weather radar station on Mt. Zirbitzkogel (by courtesy of AustroControl GmbH). The designated radar is a high-resolution C-band weather radar situated at an altitude of 2372 m above mean sea level. The data
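
    As a toy illustration of the data-driven mapping from reflectivity Z to gauge rain rate R, the sketch below trains a k-nearest-neighbour regressor (the same family as the IBk learner mentioned) on synthetic data generated from a Marshall-Palmer-like Z-R relation; it does not use the Styrian radar or gauge data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy data: gauge rain rates R [mm/h] and radar reflectivity Z [dBZ] generated
# from a Marshall-Palmer-like relation Z = 200 * R**1.6, plus noise.
rng = np.random.default_rng(3)
rain = rng.uniform(0.1, 50.0, 800)
dbz = 10.0 * np.log10(200.0 * rain**1.6) + rng.normal(0.0, 2.0, rain.size)

# k-nearest-neighbour regression from reflectivity to rain rate (IBk-style).
knn = KNeighborsRegressor(n_neighbors=15)
knn.fit(dbz.reshape(-1, 1), rain)
print(knn.predict(np.array([[20.0], [35.0], [45.0]])))   # R estimates [mm/h]
```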

  6. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    Science.gov (United States)

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283

  7. The Influence of Non-linear 3-D Mantle Rheology on Predictions of Glacial Isostatic Adjustment Models

    Science.gov (United States)

    Van Der Wal, W.; Barnhoorn, A.; Stocchi, P.; Drury, M. R.; Wu, P. P.; Vermeersen, B. L.

    2011-12-01

    Ice melting in Greenland and Antarctica can be estimated from GRACE satellite measurements. The largest source of error in these estimates is uncertainty in models for Glacial Isostatic Adjustment (GIA). GIA models that are used to correct the GRACE data have several shortcomings, including (i) mantle viscosity is only varied with depth, and (ii) stress-dependence of viscosity is ignored. Here we attempt to improve on these two issues with the ultimate goal of providing more realistic GIA predictions in areas that are currently ice covered. The improved model is first tested against observations in Fennoscandia, where there is good coverage with GIA observations, before applying it to Greenland. Deformation laws for diffusion and dislocation creep in olivine are taken from a compilation of laboratory experiments. Temperature is obtained from two different sources: surface heatflow maps as input for the heat transfer equation, and seismic velocity anomalies converted to upper mantle temperatures. Grain size and olivine water content are kept as free parameters. Surface loading is provided by an ice loading history that is constructed from constraints on past ice margins and input from climatology. The finite element model includes self-gravitation but not compressibility and background stresses. It is found that the viscosity in Fennoscandia changes in time by two orders of magnitude for a wet rheology with large grain size. The wet rheology provides the best fit to historic sea level data. However, present-day uplift and gravity rates are too low for such a rheology. We apply a wet rheology on Greenland, and simulate a Little Ice Age (LIA) increase in thickness on top of the ICE-5G ice loading history. Preliminary results show a negative geoid rate of magnitude more than 0.5 mm/year due to the LIA increase in ice thickness in combination with the non-linear upper mantle rheology. More tests are necessary to determine the influence of mantle rheology on GIA model

  8. Constraints of GRACE on the Ice Model and Mantle Rheology in Glacial Isostatic Adjustment Modeling in North-America

    Science.gov (United States)

    van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.

    2009-05-01

    GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all the ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that nonlinear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth, with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is shown for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied between 3.3 × 10⁻³⁴, 3.3 × 10⁻³⁵ and 3.3 × 10⁻³⁶ Pa⁻³ s⁻¹, the Newtonian viscosity η is varied between 1 × 10²¹ and 3 × 10²¹ Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice

  9. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    Science.gov (United States)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico goes back to the 1960s, due to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation program, or it is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in some cities, like Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimations of the target parameters and also have a good predictive capacity to estimate strong ground motion duration and response spectra.
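
    A minimal stand-in for the described neural model is a small feed-forward regressor that predicts significant duration from magnitude, epicentral distance, focal depth and a soil class. The training data and the generating rule below are invented; only the overall setup mirrors the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training set: predictors are magnitude, epicentral distance [km],
# focal depth [km] and a simple soil class; target is significant duration [s].
rng = np.random.default_rng(4)
n = 2000
mag = rng.uniform(4.0, 8.0, n)
dist = rng.uniform(10.0, 400.0, n)
depth = rng.uniform(5.0, 100.0, n)
soil = rng.integers(0, 3, n).astype(float)
# Invented generating rule, only to give the network something to learn.
duration = 0.02 * np.exp(0.7 * mag) + 0.05 * dist + 2.0 * soil + rng.normal(0, 2, n)

X = np.column_stack([mag, dist, depth, soil])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, duration)
print(model.predict([[7.1, 150.0, 30.0, 2.0]]))   # duration estimate [s]
```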

  10. Checking the new IRI model: The bottomside B parameters

    CERN Document Server

    Mosert, M; Ezquer, R; Lazo, B; Miro, G

    2002-01-01

    Electron density profiles obtained at Pruhonice (50.0, 15.0), El Arenosillo (37.1, 353.2) and Havana (23, 278) were used to check the bottomside B parameters B0 (thickness parameter) and B1 (shape parameter) predicted by the new IRI-2000 version. The electron density profiles were derived from ionograms using the ARP technique. The data base includes daytime and nighttime ionograms recorded under different seasonal and solar activity conditions. Comparisons with IRI predictions were also done. The analysis shows that: a) the parameter B1 given by IRI-2000 reproduces the observed ARP values better than the IRI-90 version does, and b) the observed B0 values are in general well reproduced by both IRI versions, IRI-90 and IRI-2000.

  11. Development of a Risk-adjustment Model for the Inpatient Rehabilitation Facility Discharge Self-care Functional Status Quality Measure.

    Science.gov (United States)

    Deutsch, Anne; Pardasaney, Poonam; Iriondo-Perez, Jeniffer; Ingber, Melvin J; Porter, Kristie A; McMullen, Tara

    2017-07-01

    Functional status measures are important patient-centered indicators of inpatient rehabilitation facility (IRF) quality of care. We developed a risk-adjusted self-care functional status measure for the IRF Quality Reporting Program. This paper describes the development and performance of the measure's risk-adjustment model. Our sample included IRF Medicare fee-for-service patients from the Centers for Medicare & Medicaid Services' 2008-2010 Post-Acute Care Payment Reform Demonstration. Data sources included the Continuity Assessment Record and Evaluation Item Set, IRF-Patient Assessment Instrument, and Medicare claims. Self-care scores were based on 7 Continuity Assessment Record and Evaluation items. The model was developed using discharge self-care score as the dependent variable, and generalized linear modeling with generalized estimating equations to account for patient characteristics and clustering within IRFs. Patient demographics, clinical characteristics at IRF admission, and clinical characteristics related to the recent hospitalization were tested as risk adjusters. A total of 4769 patient stays from 38 IRFs were included. Approximately 57% of the sample was female; 38.4%, 75-84 years; and 31.0%, 65-74 years. The final model, containing 77 risk adjusters, explained 53.7% of the variance in discharge self-care scores (P < …). Admission self-care function was the strongest predictor, followed by admission cognitive function and IRF primary diagnosis group. The range of expected and observed scores overlapped very well, with little bias across the range of predicted self-care functioning. Our risk-adjustment model demonstrated strong validity for predicting discharge self-care scores. Although the model needs validation with national data, it represents an important first step in the evaluation of IRF functional outcomes.

  12. Adjust cut-off values of immunohistochemistry models to predict risk of distant recurrence in invasive breast carcinoma patients

    Directory of Open Access Journals (Sweden)

    Yen-Ying Chen

    2016-12-01

    Conclusion: It is necessary to adjust the cut-off values of IHC-based prognostic models to fit the purpose. If the estimated risk is clearly high or low, it may be reasonable to omit multigene assays when cost is a consideration.

  13. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    Science.gov (United States)

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  14. Modeling and Dynamic Properties of a Four-Parameter Zener Model Vibration Isolator

    Directory of Open Access Journals (Sweden)

    Wen-ku Shi

    2016-01-01

    Full Text Available To install high-performance isolators in a limited installation space, a novel passive isolator based on the four-parameter Zener model is proposed. The proposed isolator consists of three major parts, namely, a connecting structure, a sealing construction, and upper and lower cavities, all of which are enclosed by four segments of metal bellows with the same diameter. The equivalent stiffness and damping model of the isolator is derived from the dynamic stiffness of the isolation system. Experiments are conducted, and the experimental error is analyzed. Test results verify the validity of the model. Theoretical analysis and numerical simulation reveal that the stiffness and damping of the isolator vary with the excitation amplitude and the structural parameters. In consideration of the design of the structural parameters, the effects of excitation amplitude, damp channel diameter, equivalent cylinder diameter of the cavities, sum of the stiffness of the bellows at the end of the isolator, and length of the damp channel on the dynamic properties of the isolator are discussed comprehensively. A design method based on the parameter sensitivity of the isolator's design parameters is proposed. Thus, the novel isolator can be practically applied in engineering and provide a significant contribution to the field.

  15. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  16. Eurodollar futures and options: convexity adjustment in HJM one- factor model

    OpenAIRE

    Henrard Marc

    2005-01-01

    In this note we give pricing formulas for different instruments linked to rate futures (Eurodollar futures). We provide the futures price, including the convexity adjustment, and the exact dates. Based on that result we price options on futures, including mid-curve options.
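
    The note's exact formulas are not reproduced in the abstract. A commonly quoted textbook approximation for a one-factor Gaussian (Ho-Lee-type) model is forward = futures - 0.5·σ²·t1·t2 with continuously compounded rates, where t1 is the futures expiry and t2 the end of the underlying deposit period. The sketch below applies that approximation with illustrative numbers and should not be read as the paper's result.

```python
def forward_from_futures(futures_rate, sigma, t1, t2):
    """Textbook Ho-Lee-style convexity adjustment (continuous compounding):
    forward = futures - 0.5 * sigma**2 * t1 * t2."""
    return futures_rate - 0.5 * sigma**2 * t1 * t2

# Illustrative numbers: 8-year Eurodollar future, 3M deposit, sigma = 1.2%/year.
fut = 0.0550                      # quoted futures rate (decimal)
adj = fut - forward_from_futures(fut, sigma=0.012, t1=8.0, t2=8.25)
print(f"convexity adjustment ~ {adj * 1e4:.1f} basis points")
```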

  17. Using multilevel modelling to assess case-mix adjusters in consumers experience surveys in health care

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  18. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  19. New trends in parameter identification for mathematical models

    CERN Document Server

    Leitão, Antonio; Zubelli, Jorge

    2018-01-01

    The Proceedings volume contains 16 contributions to the IMPA conference “New Trends in Parameter Identification for Mathematical Models”, Rio de Janeiro, Oct 30 – Nov 3, 2017, integrating the “Chemnitz Symposium on Inverse Problems on Tour”. This conference is part of the “Thematic Program on Parameter Identification in Mathematical Models” organized at IMPA in October and November 2017. One goal is to foster scientific collaboration between mathematicians and engineers from the Brazilian, European and Asian communities. Main topics are iterative and variational regularization methods in Hilbert and Banach spaces for the stable approximate solution of ill-posed inverse problems, novel methods for parameter identification in partial differential equations, problems of tomography, the solution of coupled conduction-radiation problems at high temperatures, and the statistical solution of inverse problems with applications in physics.

  20. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by the catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models with a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant across different months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that this approach yields lower NSEs than the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it yields higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.

  1. Importance of hydrological parameters in contaminant transport modeling in a terrestrial environment

    International Nuclear Information System (INIS)

    Tsuduki, Katsunori; Matsunaga, Takeshi

    2007-01-01

    A grid type multi-layered distributed parameter model for calculating discharge in a watershed was described. Model verification with our field observation resulted in different sets of hydrological parameter values, all of which reproduced the observed discharge. The effect of those varied hydrological parameters on contaminant transport calculation was examined and discussed by simulation of event water transfer. (author)

  2. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach of Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior which the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating an optimum dataset for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
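
    The report's 9-parameter Zener circuit model is not reproduced here. As a much simpler illustration of compact-model parameter extraction, the sketch below fits the forward Shockley characteristic I = Is·(exp(V/(n·VT)) - 1) to synthetic I-V data by nonlinear least squares in log-current space. The device values and initial guesses are invented, and no Xyce or Dakota tooling is involved.

```python
import numpy as np
from scipy.optimize import curve_fit

VT = 0.02585  # thermal voltage at ~300 K [V]

def log_diode_current(v, log10_is, n):
    """log10 of the forward Shockley characteristic I = Is * (exp(V/(n*VT)) - 1)."""
    return log10_is + np.log10(np.exp(v / (n * VT)) - 1.0)

# Synthetic forward I-V data from a "true" device (Is = 1e-12 A, n = 1.8).
rng = np.random.default_rng(6)
v = np.linspace(0.3, 0.8, 60)
i_true = 1e-12 * (np.exp(v / (1.8 * VT)) - 1.0)
i_meas = i_true * (1.0 + 0.02 * rng.normal(size=v.size))

# Initial parameter extraction by nonlinear least squares in log-current space.
popt, _ = curve_fit(log_diode_current, v, np.log10(i_meas), p0=[-10.0, 2.0])
print(f"Is = {10 ** popt[0]:.2e} A, n = {popt[1]:.3f}")
```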

  3. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.

  4. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

  5. Parameter optimization method for the water quality dynamic model based on data-driven theory.

    Science.gov (United States)

    Liang, Shuxiu; Han, Songlin; Sun, Zhaochen

    2015-09-15

    Parameter optimization is important for developing a water quality dynamic model. In this study, we applied data-driven method to select and optimize parameters for a complex three-dimensional water quality model. First, a data-driven model was developed to train the response relationship between phytoplankton and environmental factors based on the measured data. Second, an eight-variable water quality dynamic model was established and coupled to a physical model. Parameter sensitivity analysis was investigated by changing parameter values individually in an assigned range. The above results served as guidelines for the control parameter selection and the simulated result verification. Finally, using the data-driven model to approximate the computational water quality model, we employed the Particle Swarm Optimization (PSO) algorithm to optimize the control parameters. The optimization routines and results were analyzed and discussed based on the establishment of the water quality model in Xiangshan Bay (XSB). Copyright © 2015 Elsevier Ltd. All rights reserved.
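
    A minimal particle swarm optimizer of the kind used in the study is sketched below. In the paper the objective would be the misfit of the data-driven surrogate of the water quality model against observations; here a simple quadratic stand-in objective with a known optimum is used, and all PSO settings are illustrative.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a vector-valued parameter set."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Stand-in objective: misfit between "simulated" and "observed" values,
# here a simple quadratic with a known optimum at (0.3, 1.2, 0.05).
target = np.array([0.3, 1.2, 0.05])
misfit = lambda p: float(np.sum((p - target) ** 2))
best, best_val = pso(misfit, bounds=[(0, 1), (0, 5), (0, 1)])
print(best, best_val)
```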

  6. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    Science.gov (United States)

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  7. Transformations among CE–CVM model parameters for ...

    Indian Academy of Sciences (India)

    Unknown

    (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presen- ted in detail in this communication. Furthermore, the details of transformations required to express the ...

  8. Filling Gaps in the Acculturation Gap-Distress Model: Heritage Cultural Maintenance and Adjustment in Mexican-American Families.

    Science.gov (United States)

    Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J

    2016-07-01

    The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.

  9. Three-dimensional FEM model of FBGs in PANDA fibers with experimentally determined model parameters

    Science.gov (United States)

    Lindner, Markus; Hopf, Barbara; Koch, Alexander W.; Roths, Johannes

    2017-04-01

    A 3D-FEM model has been developed to improve the understanding of multi-parameter sensing with Bragg gratings in attached or embedded polarization maintaining fibers. The material properties of the fiber, especially Young's modulus and Poisson's ratio of the fiber's stress applying parts, are crucial for accurate simulations, but are usually not provided by the manufacturers. A methodology is presented to determine the unknown parameters by using experimental characterizations of the fiber and iterative FEM simulations. The resulting 3D-Model is capable of describing the change in birefringence of the free fiber when exposed to longitudinal strain. In future studies the 3D-FEM model will be employed to study the interaction of PANDA fibers with the surrounding materials in which they are embedded.

  10. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  11. Estimation of hydrodinamics parameters in a volcanic fractured phreatic aquifer in Costa Rica. Part II. Double porosity model

    International Nuclear Information System (INIS)

    Macias, Julio; Vargas, Asdrubal

    2017-01-01

    The MIM 1D transport model was successfully applied to simulate the asymmetric behavior observed in three breakthrough curves of tracer tests performed under natural gradient conditions in a phreatic fractured volcanic aquifer. The transport parameters obtained after adjustment with a computer program suggest that only 50% of the total porosity effectively contributed to the advective-dispersive transport (mobile fraction) and the other 50% behaved as a temporary reservoir for the tracer (immobile fraction). The estimated values of the hydraulic properties and MIM model parameters are within the range of values reported by other researchers. It was possible to establish a conceptual and numerical framework to explain the behavior of the three tracer-test curves, despite the limitations in quality and quantity of the available field information. (author) [es

  12. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    Science.gov (United States)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand the human vibration exposure to transportation vehicle vibrations and to help design and improve the anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all of the possible 23 structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of non-dominated sorting genetic algorithm (NSGA-II) based on Pareto optimization principle was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study was used to assess the models and to identify the best one, which was based on both the goodness of curve fits and comprehensive goodness of the fits. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop the models with other DOFs.

  13. MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I)

    Directory of Open Access Journals (Sweden)

    Sit B.

    2009-08-01

    Full Text Available This paper examines the equations of dynamics and statics of an adjustable intermediate loop of a carbon dioxide heat pump station. The heat pump station is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the speed of circulation of the liquid in the loop and by changing the area of the heat-transmitting surface, both in the evaporator and in the intermediate heat exchanger, depending on the operating parameter, for example, external air temperature and wind speed.

  14. An Investigation of Invariance Properties of One, Two and Three Parameter Logistic Item Response Theory Models

    Directory of Open Access Journals (Sweden)

    O.A. Awopeju

    2017-12-01

    Full Text Available The study investigated the invariance properties of the one-, two- and three-parameter logistic item response theory models. It examined the best fit among the one-parameter logistic (1PL), two-parameter logistic (2PL) and three-parameter logistic (3PL) IRT models for the SSCE, 2008 in Mathematics. It also investigated the degree of invariance of the IRT model-based item difficulty parameter estimates in the SSCE in Mathematics across different samples of examinees, and examined the degree of invariance of the IRT model-based item discrimination estimates in the SSCE in Mathematics across different samples of examinees. In order to achieve the set objectives, 6000 students (3000 males and 3000 females) were drawn from the population of 35262 who wrote the 2008 paper 1 Senior Secondary Certificate Examination (SSCE) in Mathematics organized by the National Examination Council (NECO). The item difficulty and item discrimination parameter estimates from CTT and IRT were tested for invariance using BILOG-MG 3, and correlation analysis was done using SPSS version 20. The research findings were that the two-parameter model IRT item difficulty and discrimination parameter estimates exhibited the invariance property consistently across different samples, and that the 2-parameter model was suitable for all samples of examinees, unlike the one-parameter and 3-parameter models.

  15. Numerical Modeling of Piezoelectric Transducers Using Physical Parameters

    NARCIS (Netherlands)

    Cappon, H.; Keesman, K.J.

    2012-01-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and

  16. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    Sep 28, 2016 ... by methanol. In this study, the unstructured models based on growth kinetic equation, fed-batch mass balance and constancy of cell and protein yields were developed and constructed following the substrates, glycerol and methanol. The growth model on glycerol is mostly published while the growth model ...

  17. Parameter study of a model for NOx emissions from PFBC

    DEFF Research Database (Denmark)

    Jensen, Anker Degn; Johnsson, Jan Erik

    1996-01-01

    Simulations with a mathematical model of a pressurized bubbling fluidized bed combustor (PFBC) combined with a kinetic model for NO formation and reduction are presented and discussed. The kinetic model for NO formation and reduction considers NO and NH3 as the fixed nitrogen species, and include...

  18. Kinetic models and parameters estimation study of biomass and ...

    African Journals Online (AJOL)

    The growth kinetics and modeling of ethanol production from inulin by Pichia caribbica (KC977491) were studied in a batch system. Unstructured models were proposed using the logistic equation for growth, the Luedeking-Piret equation for ethanol production and a modified Luedeking-Piret model for substrate consumption.
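
    The three model components named in the abstract can be written as a small ODE system: logistic growth for biomass, the Luedeking-Piret equation for product, and a modified Luedeking-Piret term for substrate. The parameter values below are invented for illustration and are not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth, Luedeking-Piret product formation and modified
# Luedeking-Piret substrate consumption (all parameter values are invented).
mu_max, x_max = 0.25, 12.0          # 1/h, g/L
alpha, beta = 2.0, 0.05             # growth- and non-growth-associated terms
gamma, delta = 3.5, 0.08            # substrate analogues of alpha, beta

def rhs(t, y):
    x, p, s = y
    dx = mu_max * x * (1.0 - x / x_max)        # logistic biomass growth
    dp = alpha * dx + beta * x                  # Luedeking-Piret
    ds = -(gamma * dx + delta * x)              # modified Luedeking-Piret
    return [dx, dp, ds]

sol = solve_ivp(rhs, (0.0, 48.0), [0.2, 0.0, 80.0], dense_output=True)
x48, p48, s48 = sol.y[:, -1]
print(f"t=48 h: biomass {x48:.2f} g/L, ethanol {p48:.2f} g/L, substrate {s48:.2f} g/L")
```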

  19. Kinetic models and parameters estimation study of biomass and ...

    African Journals Online (AJOL)

    compaq

    2017-01-11

    Jan 11, 2017 ... The growth kinetics and modeling of ethanol production from inulin by Pichia caribbica (KC977491) were studied in a batch system. Unstructured models were proposed using the logistic equation for growth, the Luedeking-Piret equation for ethanol production and a modified Luedeking-Piret model for.

  20. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  1. Gas ultracentrifuge separative parameters modeling using hybrid neural networks

    International Nuclear Information System (INIS)

    Crus, Maria Ursulina de Lima

    2005-01-01

    A hybrid neural network is developed for the calculation of the separative performance of an ultracentrifuge. A feed-forward neural network is trained to estimate the internal flow parameters of a gas ultracentrifuge, and then these parameters are applied in the diffusion equation. For this study, a set of 573 experimental data points is used to establish the relation between the separative performance and the controlled variables. The process control variables considered are: the feed flow rate F, the cut θ and the product pressure Pp. The mechanical arrangements consider the radial waste scoop dimension, the rotating baffle size Ds and the axial feed location ZE. The methodology was validated through the comparison of the calculated separative performance with experimental values. This methodology may be applied to other processes, just by adapting the phenomenological procedures. (author)

  2. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
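
    The basic recursive least squares recursion that the uncertainty method builds on is sketched below (without the paper's colored-residual autocorrelation correction). The toy linear model and noise level are invented.

```python
import numpy as np

def rls(regressors, outputs, lam=1.0):
    """Standard recursive least squares; returns the parameter trajectory.
    (Shows only the basic recursion, not the colored-residual correction.)"""
    n, p = regressors.shape
    theta = np.zeros(p)
    P = 1e6 * np.eye(p)                      # large initial covariance
    history = np.empty((n, p))
    for k in range(n):
        x = regressors[k]
        err = outputs[k] - x @ theta
        gain = P @ x / (lam + x @ P @ x)
        theta = theta + gain * err
        P = (P - np.outer(gain, x) @ P) / lam
        history[k] = theta
    return history

# Toy aerodynamic-style linear model: output = 0.8*x1 - 0.3*x2 + noise.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 2))
y = X @ np.array([0.8, -0.3]) + 0.05 * rng.normal(size=500)
print("final estimate:", np.round(rls(X, y)[-1], 3))
```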

  3. Order parameter model for unstable multilane traffic flow

    OpenAIRE

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    1999-01-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of phase transitions "free flow -> synchronized motion -> jam" as well as the hysteresis in the transition "free flow <-> synchronized motion". We introduce a new variable called the order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the "many-body" effects in the ...

  4. Connecting Global to Local Parameters in Barred Galaxy Models

    Indian Academy of Sciences (India)

    The velocity and the angular velocity units are 10 km/s and 10 km/s/kpc, respectively, while G is equal to unity. Our test particle is a star of mass = 1. Therefore, the energy unit (per unit mass) is 100 (km/s)². In these units the values of the parameters are α = 12 kpc, b = 2, cb = 1.5 kpc, Md = 9500 and Mb = 3000. It is evident that ...

  5. COMPUTER MODELING OF HYDRODYNAMIC PARAMETERS AT BOUNDARIES OF WATER INTAKE AREA WITH FILTERING INTAKE

    Directory of Open Access Journals (Sweden)

    Boronina Lyudmila Vladimirovna

    2012-12-01

    Full Text Available Improvement of water intake technologies is of great importance. These technologies are required to provide high quality water intake and treatment; they must be sufficiently simple and reliable, and they must be easily adjustable to particular local conditions. A mathematical model of a water supply area near the filtering water intake is proposed. On its basis, a software package designed for the calculation of parameters of the supply area along with its graphical representation is developed. To improve the efficiency of water treatment plants, the authors propose a new method of their integration into the landscape by taking account of velocity distributions in the water supply area within the water reservoir where the plant installation is planned. In the proposed relationship, the filtration rate and the scattering rate at the outlet of the supply area are taken into account, and they assure more precise projections of the inlet velocity. In the present study, the accuracy of the mathematical model involving the scattering of a turbulent flow has been assessed. The assessment procedure is based on verification of the mean values equality hypothesis and on comparison with the experimental data. The results and conclusions obtained by means of the method developed by the authors have been verified through comparison of deviations of specific values calculated through the employment of similar algorithms in MathCAD, Maple and PLUMBING. The method of the water supply area analysis, with the turbulent scattering area having been taken into account, and the software package make it possible to numerically estimate the efficiency of the pre-purification process by tailoring a number of parameters of the filtering component of the water intake to the river hydrodynamic properties. Therefore, the method and the software package provide a new tool for better design, installation and operation of water treatment plants with respect to filtration and

  6. Entropy Parameter M in Modeling a Flow Duration Curve

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-12-01

    Full Text Available A flow duration curve (FDC is widely used for predicting water supply, hydropower, environmental flow, sediment load, and pollutant load. Among different methods of constructing an FDC, the entropy-based method, developed recently, is appealing because of its several desirable characteristics, such as simplicity, flexibility, and statistical basis. This method contains a parameter, called entropy parameter M, which constitutes the basis for constructing the FDC. Since M is related to the ratio of the average streamflow to the maximum streamflow which, in turn, is related to the drainage area, it may be possible to determine M a priori and construct an FDC for ungauged basins. This paper, therefore, analyzed the characteristics of M in both space and time using streamflow data from 73 gauging stations in the Brazos River basin, Texas, USA. Results showed that the M values were impacted by reservoir operation and possibly climate change. The values were fluctuating, but relatively stable, after the operation of the reservoirs. Parameter M was found to change inversely with the ratio of average streamflow to the maximum streamflow. When there was an extreme event, there occurred a jump in the M value. Further, spatially, M had a larger value if the drainage area was small.

  7. Optimization of process parameters through GRA, TOPSIS and RSA models

    Directory of Open Access Journals (Sweden)

    Suresh Nipanikar

    2018-01-01

    Full Text Available This article investigates the effect of cutting parameters on surface roughness and flank wear during machining of the titanium alloy Ti-6Al-4V ELI (Extra Low Interstitial) in a minimum quantity lubrication environment using a PVD TiAlN insert. A full factorial design of experiments was used, with two factors at three levels and two factors at two levels. The turning parameters studied were cutting speed (50, 65, 80 m/min) and feed (0.08, 0.15, 0.2 mm/rev), with the depth of cut held constant at 0.5 mm. The results show contributions to surface roughness of 44.61% from feed and 43.57% from cutting speed, and contributions to tool flank wear of 53.16% from the cutting tool and 26.47% from cutting speed. Grey relational analysis and the TOPSIS method suggest the optimum combination of machining parameters as cutting speed: 50 m/min, feed: 0.08 mm/rev, cutting tool: PVD TiAlN, cutting fluid: palm oil

  8. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...

  9. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    Science.gov (United States)

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  11. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
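
    A minimal sketch of the chain described above is given below: cycle counts feed Miner's rule, and the deterministic time to unit damage is taken as the Weibull characteristic life. The S-N constants, the cycle counts (standing in for Rainflow output), and the Weibull shape parameter are illustrative assumptions, not the paper's solder-joint model.

        # Miner's-rule damage accumulation feeding a Weibull life model (illustrative).
        import numpy as np

        def cycles_to_failure(stress_range, C=1e12, m=3.0):
            """Hypothetical S-N curve: N = C * S**(-m)."""
            return C * stress_range ** (-m)

        def miners_damage(stress_ranges, counts):
            """Accumulated damage D = sum(n_i / N_i); failure is declared at D = 1."""
            return np.sum(counts / cycles_to_failure(stress_ranges))

        # Suppose rainflow counting produced these load ranges (MPa) per year of service.
        ranges = np.array([40.0, 60.0, 80.0])
        counts_per_year = np.array([2.0e5, 5.0e4, 1.0e4])

        damage_per_year = miners_damage(ranges, counts_per_year)
        eta = 1.0 / damage_per_year          # deterministic time to D = 1 ~ characteristic life
        beta = 2.5                           # assumed Weibull shape parameter
        t = np.array([1.0, 5.0, 10.0])       # service times (years)
        print(f"characteristic life ~ {eta:.1f} years")
        print("R(t) =", np.exp(-(t / eta) ** beta))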

  12. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    Science.gov (United States)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
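
    The replacement of fixed model parameters by probability distributions can be illustrated with a generic Monte Carlo sketch; the toy spectral-level function and the assumed input distributions below are placeholders, not the shock-associated noise model itself.

        # Monte Carlo propagation of parameter uncertainty through a toy prediction model.
        import numpy as np

        def toy_spectrum_level(freq, amplitude, decay):
            """Stand-in model: spectral level in dB as a function of two parameters."""
            return 10.0 * np.log10(amplitude) - decay * np.log10(freq)

        rng = np.random.default_rng(42)
        n = 10_000
        amplitude = rng.lognormal(mean=0.0, sigma=0.2, size=n)   # uncertain parameter 1
        decay = rng.normal(loc=3.0, scale=0.3, size=n)           # uncertain parameter 2
        freq = 2_000.0                                           # Hz, fixed evaluation point

        levels = toy_spectrum_level(freq, amplitude, decay)
        print(f"mean = {levels.mean():.2f} dB, 95% interval = "
              f"({np.percentile(levels, 2.5):.2f}, {np.percentile(levels, 97.5):.2f}) dB")

        # Crude sensitivity indicator: squared correlation of each input with the output.
        for name, x in [("amplitude", amplitude), ("decay", decay)]:
            print(name, np.corrcoef(x, levels)[0, 1] ** 2)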

  13. Parameter Estimation and Prediction of a Nonlinear Storage Model: an algebraic approach

    NARCIS (Netherlands)

    Doeswijk, T.G.; Keesman, K.J.

    2005-01-01

    Generally, parameters that are nonlinear in system models are estimated by nonlinear least-squares optimization algorithms. In this paper, for a nonlinear discrete-time model with a polynomial quotient structure in input, output, and parameters, a method is proposed to re-parameterize the model such

  14. Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results

    Science.gov (United States)

    San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.

    2013-01-01

    In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model, 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…

  15. Connecting Global to Local Parameters in Barred Galaxy Models

    Indian Academy of Sciences (India)


    We present connections between global and local parameters in a realistic dynamical model, describing motion in a barred galaxy. Expanding the global model in the vicinity of a stable Lagrange point, we find the potential of a two-dimensional perturbed harmonic oscillator, which describes local motion near the ...

  16. Mathematical modelling in blood coagulation : simulation and parameter estimation

    NARCIS (Netherlands)

    W.J.H. Stortelder (Walter); P.W. Hemker (Piet); H.C. Hemker

    1997-01-01

    This paper describes the mathematical modelling of a part of the blood coagulation mechanism. The model includes the activation of factor X by a purified enzyme from Russel's Viper Venom (RVV), factor V and prothrombin, and also comprises the inactivation of the products formed. In this

  17. On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models.

    Science.gov (United States)

    Tang, Yongqiang

    2017-12-01

    Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
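
    For reference, the sketch below implements Rubin's combining rules, which produce the variance estimator whose bias is analyzed above; the effect estimates are hypothetical, and the final comment marks where a corrected or bootstrap variance would be substituted.

        # Rubin's rules for combining multiply-imputed estimates (standard formulas).
        import numpy as np

        def rubin_combine(estimates, variances):
            """estimates, variances: one treatment-effect estimate and its variance
            from each of the m imputed data sets."""
            q = np.asarray(estimates, dtype=float)
            u = np.asarray(variances, dtype=float)
            m = len(q)
            q_bar = q.mean()                         # pooled point estimate
            w = u.mean()                             # within-imputation variance
            b = q.var(ddof=1)                        # between-imputation variance
            t = w + (1.0 + 1.0 / m) * b              # Rubin's total variance
            return q_bar, t

        est = [1.8, 2.1, 1.9, 2.3, 2.0]              # hypothetical effect estimates
        var = [0.09, 0.10, 0.08, 0.11, 0.09]
        q_bar, t = rubin_combine(est, var)
        print(f"pooled effect = {q_bar:.2f}, Rubin SE = {t ** 0.5:.3f}")
        # For control-based PMMs the paper argues this total variance can be biased;
        # a bootstrap or analytically derived variance would be substituted for t here.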

  18. Modeling and Predicting the EUR/USD Exchange Rate: The Role of Nonlinear Adjustments to Purchasing Power Parity

    OpenAIRE

    Jesús Crespo Cuaresma; Anna Orthofer

    2010-01-01

    Reliable medium-term forecasts are essential for forward-looking monetary policy decision-making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by the purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predict...

  19. A new glacial isostatic adjustment model for Antarctica: calibrated and tested using observations of relative sea-level change and present-day uplift rates

    Science.gov (United States)

    Whitehouse, Pippa L.; Bentley, Michael J.; Milne, Glenn A.; King, Matt A.; Thomas, Ian D.

    2012-09-01

    We present a glacial isostatic adjustment (GIA) model for Antarctica. This is driven by a new deglaciation history that has been developed using a numerical ice-sheet model, and is constrained to fit observations of past ice extent. We test the sensitivity of the GIA model to uncertainties in the deglaciation history, and seek earth model parameters that minimize the misfit of model predictions to relative sea-level observations from Antarctica. We find that the relative sea-level predictions are fairly insensitive to changes in lithospheric thickness and lower mantle viscosity, but show high sensitivity to changes in upper mantle viscosity and constrain this value (95 per cent confidence) to lie in the range 0.8-2.0 × 10²¹ Pa s. Significant misfits at several sites may be due to errors in the deglaciation history, or unmodelled effects of lateral variations in Earth structure. When we compare our GIA model predictions with elastic-corrected GPS uplift rates we find that the predicted rates are biased high (weighted mean bias = 1.8 mm yr⁻¹) and there is a weighted root-mean-square (WRMS) error of 2.9 mm yr⁻¹. In particular, our model systematically over-predicts uplift rates in the Antarctic Peninsula, and we attempt to address this by adjusting the Late Holocene loading history in this region, within the bounds of uncertainty of the deglaciation model. Using this adjusted model the weighted mean bias improves from 1.8 to 1.2 mm yr⁻¹, and the WRMS error is reduced to 2.3 mm yr⁻¹, compared with 4.9 mm yr⁻¹ for ICE-5G v1.2 and 5.0 mm yr⁻¹ for IJ05. Finally, we place spatially variable error bars on our GIA uplift rate predictions, taking into account uncertainties in both the deglaciation history and modelled Earth viscosity structure. This work provides a new GIA correction for the GRACE data in Antarctica, thus permitting more accurate constraints to be placed on current ice-mass change.

  20. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
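
    The first (screening) step of the two-step approach can be sketched with a plain implementation of Morris elementary effects on a toy model; the trajectory construction below is a simplified variant and the model function is an illustrative assumption, not the vascular access model of the paper.

        # Morris elementary-effects screening (step 1 of the two-step approach), toy model.
        import numpy as np

        def model(x):
            """Toy model with one influential pair and one near-inert parameter."""
            return x[0] ** 2 + 3.0 * x[1] + 0.01 * x[2] + x[0] * x[1]

        def morris_screen(f, dim, trajectories=30, delta=0.25, rng=None):
            rng = rng or np.random.default_rng(0)
            effects = [[] for _ in range(dim)]
            for _ in range(trajectories):
                x = rng.uniform(0.0, 1.0 - delta, size=dim)
                base = f(x)
                for i in rng.permutation(dim):       # one-at-a-time perturbations
                    x_new = x.copy()
                    x_new[i] += delta
                    fx_new = f(x_new)
                    effects[i].append((fx_new - base) / delta)
                    x, base = x_new, fx_new
            mu_star = np.array([np.mean(np.abs(e)) for e in effects])
            sigma = np.array([np.std(e) for e in effects])
            return mu_star, sigma

        mu_star, sigma = morris_screen(model, dim=3)
        print("mu*  :", mu_star)   # large mu* -> keep for the gPCE step; small -> fix
        print("sigma:", sigma)     # large sigma -> nonlinear or interacting parameter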

  1. Multiple adjustable vascular clamp prototype: feasibility study on an experimental model of end-to-side microsurgical vascular anastomosis.

    Science.gov (United States)

    Pereira, A; Ichihara, S; Collon, S; Bodin, F; Gay, A; Facca, S; Liverneaux, P

    2014-12-01

    The aim of this study was to establish the feasibility of microsurgical end-to-side vascular anastomosis with a multiclamp adjustable vascular clamp prototype in an inert experimental model. Our method consisted of performing an end-to-side microsurgical anastomosis with 10/0 suture on a 2-mm diameter segment. In group 1, the end-to-side segment was held in place by a double clamp and a single end clamp. In group 2, the segment was held in place with a single multiclamp adjustable clamp. The average time for performing the anastomosis was shorter in group 2. The average number of sutures was the same in both groups. No leak was found and permeability was always positive in both groups. Our results show that performing end-to-side anastomosis with a multiclamp adjustable vascular clamp is feasible in an inert experimental model. Feasibility in a live animal model has to be demonstrated before clinical use. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  2. Parameters extraction for the one-diode model of a solar cell

    Science.gov (United States)

    Sabadus, Andreea; Mihailetchi, Valentin; Paulescu, Marius

    2017-12-01

    This paper is focused on the numerical algorithms for solving the one-diode model of a crystalline solar cell. Numerical experiments show that, generally, the algorithms reproduce accurately the I-V characteristics while the modeled parameters (the diode saturation current, serial resistance and the diode ideality factor) experience a large dispersion. The question arising here is: which is the correct set of the modeled parameters? In order to address this issue, the extracted parameters are compared with the measured ones for a silicon solar cell produced at ISC Konstanz. An attempt to solve numerically the one-diode model for accurate parameters extraction is discussed.
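
    As a sketch of the forward problem behind the parameter extraction, the implicit single-diode equation can be solved point by point for the I-V characteristic; the parameter values below are illustrative assumptions, not those extracted for the ISC Konstanz cell.

        # Solving the one-diode model I(V) point by point (illustrative parameter values).
        import numpy as np
        from scipy.optimize import brentq

        k, q = 1.380649e-23, 1.602176634e-19
        T = 298.15
        Vt = k * T / q                               # thermal voltage, about 25.7 mV

        # Example parameter set of the kind the extraction algorithms return.
        Iph, I0, n, Rs, Rsh = 8.0, 1e-9, 1.2, 0.005, 200.0

        def current(V):
            """Implicit single-diode equation solved for I at a given terminal voltage."""
            def f(I):
                return (Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0)
                        - (V + I * Rs) / Rsh - I)
            return brentq(f, -2.0 * Iph, 2.0 * Iph)

        for V in np.linspace(0.0, 0.7, 8):
            print(f"V = {V:.2f} V  I = {current(V):.3f} A")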

  3. Model Predictive Control of Nonlinear Parameter Varying Systems via Receding Horizon Control Lyapunov Functions

    National Research Council Canada - National Science Library

    Sznaier, Mario

    2001-01-01

    .... In this chapter we propose a suboptimal regulator for nonlinear parameter varying, control affine systems based upon the combination of model predictive and control Lyapunov function techniques...

  4. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.
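
    One of the modeling parameters discussed above, the number of latent variables, can be tuned by cross-validation; the sketch below does this on a synthetic data set and is a simplified stand-in for the paper's error-propagation analysis.

        # Choosing the number of PLS latent variables by cross-validation (sketch).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        X = rng.normal(size=(80, 50))                     # stand-in for NIR spectra
        beta = np.zeros(50)
        beta[:5] = [2.0, -1.0, 0.5, 1.0, -2.0]
        y = X @ beta + rng.normal(scale=0.5, size=80)     # stand-in for reference values

        scores = []
        for n_comp in range(1, 11):
            pls = PLSRegression(n_components=n_comp)
            cv = cross_val_score(pls, X, y, cv=5, scoring="neg_root_mean_squared_error")
            scores.append((-cv.mean(), n_comp))

        rmse, best = min(scores)
        print(f"best number of latent variables: {best} (CV RMSE = {rmse:.3f})")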

  5. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In this paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. The most appropriate uncertainty analysis proved to be Bayesian estimation with numerical estimation of the posterior, which can then be approximated with an appropriate probability distribution, in this paper a lognormal distribution. (author)
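
    A minimal sketch of the approach described above is given below: a numerical Bayesian update of a time-related failure rate, followed by a lognormal approximation of the posterior. The prior, the observed data, and the error-factor convention are illustrative assumptions, not the Krsko PSA values.

        # Numerical Bayesian update of a failure rate, then a lognormal approximation.
        import numpy as np

        # Illustrative data: 2 failures observed over 5.0e4 component-hours.
        failures, exposure = 2, 5.0e4

        # Lognormal prior on the failure rate lambda (illustrative median and error factor).
        prior_median, error_factor = 1e-5, 10.0
        sigma = np.log(error_factor) / 1.645            # EF taken as the 95th/50th ratio
        mu = np.log(prior_median)

        lam = np.logspace(-8, -2, 2000)                 # grid over plausible rates (1/h)
        dlam = np.gradient(lam)
        prior = np.exp(-(np.log(lam) - mu) ** 2 / (2 * sigma ** 2)) / lam
        likelihood = lam ** failures * np.exp(-lam * exposure)      # Poisson kernel
        post = prior * likelihood
        post /= np.sum(post * dlam)                     # normalize numerically

        # Approximate the numerical posterior by a lognormal (match moments of log-lambda).
        mu_post = np.sum(np.log(lam) * post * dlam)
        var_post = np.sum((np.log(lam) - mu_post) ** 2 * post * dlam)
        print(f"posterior median ~ {np.exp(mu_post):.2e} /h, "
              f"error factor ~ {np.exp(1.645 * np.sqrt(var_post)):.1f}")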

  6. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    Science.gov (United States)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was found to be minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.

  7. Sensitivity analysis for the study of influential parameters in tyre models

    OpenAIRE

    Kiébré, Rimyaledgo; Anstett-Collin, Floriane; Basset, Michel

    2011-01-01

    This paper studies two tyre models, the Fiala model and the Pacejka model. Both models are nonlinear and depend on parameters which must be identified from measurement data. A major problem is to efficiently prepare and plan the experiments. It is necessary to determine the parameters which have the greatest influence on the model output, and account for the output uncertainty which must be reduced. Therefore, the methodology presented here will help to carry out a var...

  8. A latent parameter node-centric model for spatial networks.

    Directory of Open Access Journals (Sweden)

    Nicholas D Larusso

    Full Text Available Spatial networks, in which nodes and edges are embedded in space, play a vital role in the study of complex systems. For example, many social networks attach geo-location information to each user, allowing the study of not only topological interactions between users, but spatial interactions as well. The defining property of spatial networks is that edge distances are associated with a cost, which may subtly influence the topology of the network. However, the cost function over distance is rarely known, thus developing a model of connections in spatial networks is a difficult task. In this paper, we introduce a novel model for capturing the interaction between spatial effects and network structure. Our approach represents a unique combination of ideas from latent variable statistical models and spatial network modeling. In contrast to previous work, we view the ability to form long/short-distance connections to be dependent on the individual nodes involved. For example, a node's specific surroundings (e.g. network structure and node density) may make it more likely to form a long distance link than other nodes with the same degree. To capture this information, we attach a latent variable to each node which represents a node's spatial reach. These variables are inferred from the network structure using a Markov Chain Monte Carlo algorithm. We experimentally evaluate our proposed model on 4 different types of real-world spatial networks (e.g. transportation, biological, infrastructure, and social). We apply our model to the task of link prediction and achieve up to a 35% improvement over previous approaches in terms of the area under the ROC curve. Additionally, we show that our model is particularly helpful for predicting links between nodes with low degrees. In these cases, we see much larger improvements over previous models.

  9. Return predictability and intertemporal asset allocation: Evidence from a bias-adjusted VAR model

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    with quarterly data from 1952 to 2006. The results show that correcting the VAR parameters for small-sample bias has both quantitatively and qualitatively important e¤ects on the strategic intertemporal part of optimal portfolio choice, especially for bonds: for intermediate values of risk...

  10. EMF 7 model comparisons: key relationships and parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, B.G.

    1983-12-01

    A simplified textbook model of aggregate demand and supply interprets the similarities and differences in the price and income responses of the various EMF 7 models to oil and policy shocks. The simplified model is a marriage of Hicks' classic IS-LM formulation of the Keynesian theory of effective demand with a rudimentary model of aggregate supply, combining a structural Phillips curve for wage determination and a markup theory of price determination. The reduced-form income equation from the fix-price IS-LM model is used to define an aggregate demand (AD) locus in P-Y space, showing alternative pairs of the implicit GNP deflator and real GNP which would simultaneously satisfy the saving-investment identity and the condition for money market equilibrium. An aggregate supply (AS) schedule is derived by a similar reduction of relations between output and labor demand, unemployment and wage inflation, and the wage-price-productivity nexus governing markup pricing. Given a particular econometric model it is possible to derive IS and LM curves algebraically. The resulting locuses would show alternative combinations of interest rate and real income which equilibrate real income identity on the IS side and the demand and supply of money on the LM side. By further substitution the reduced form fix-price income relation could be obtained for direct quantification of the AD locus. The AS schedule is obtainable by algebraic reduction of the structural supply side equations.

  11. Corruption of parameter behavior and regionalization by model and forcing data errors: A Bayesian example using the SNOW17 model

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-07-01

    The current study evaluates the impacts of various sources of uncertainty involved in hydrologic modeling on parameter behavior and regionalization utilizing different Bayesian likelihood functions and the Differential Evolution Adaptive Metropolis (DREAM) algorithm. The developed likelihood functions differ in their underlying assumptions and treatment of error sources. We apply the developed method to a snow accumulation and ablation model (National Weather Service SNOW17) and generate parameter ensembles to predict snow water equivalent (SWE). Observational data include precipitation and air temperature forcing along with SWE measurements from 24 sites with diverse hydroclimatic characteristics. A multiple linear regression model is used to construct regionalization relationships between model parameters and site characteristics. Results indicate that model structural uncertainty has the largest influence on SNOW17 parameter behavior. Precipitation uncertainty is the second largest source of uncertainty, showing greater impact at wetter sites. Measurement uncertainty in SWE tends to have little impact on the final model parameters and resulting SWE predictions. Considering all sources of uncertainty, parameters related to air temperature and snowfall fraction exhibit the strongest correlations to site characteristics. Parameters related to the length of the melting period also show high correlation to site characteristics. Finally, model structural uncertainty and precipitation uncertainty dramatically alter parameter regionalization relationships in comparison to cases where only uncertainty in model parameters or output measurements is considered. Our results demonstrate that accurate treatment of forcing, parameter, model structural, and calibration data errors is critical for deriving robust regionalization relationships.

  12. Mirror symmetry for two-parameter models. Pt. 2

    International Nuclear Information System (INIS)

    Candelas, Philip; Font, Anamaria; Katz, Sheldon; Morrison, David R.

    1994-01-01

    We describe in detail the space of the two Kaehler parameters of the Calabi-Yau manifold P⁴(1,1,1,6,9) [D. R. Morrison, 1993] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi-Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6, Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each bidegree. We find that these numbers can be negative, even in genus zero. We also investigate an SL(2, Z) symmetry that acts on a boundary of the moduli space. ((orig.))

  13. Mathematical modeling to reconstruct Elastic and geoelectrical parameters

    Directory of Open Access Journals (Sweden)

    Y. V. Kiselev

    2002-06-01

    Full Text Available The monitoring of the underground medium requires estimation of the accuracy of the methods used. Numerical simulation of the solution of the 2D inverse problem of reconstructing the seismic and electrical parameters of local (comparable in size with the wavelength) inhomogeneities by the diffraction tomography method, based upon the first-order Born approximation, is considered. The direct problems for the Lame and Maxwell equations are solved by the finite difference method, which allows us to correctly take into account the diffraction phenomena produced by target inhomogeneities with simple and complex geometry. For reconstruction of the local inhomogeneities the algebraic methods and the optimizing procedures are used. The investigation includes a parametric representation of inhomogeneities by simple and complex functions. The results of estimation of the accuracy of the reconstruction of elastic inhomogeneities and inhomogeneities of electrical conductivity by the diffraction tomography method are presented.

  14. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    Science.gov (United States)

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken into consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters in conditions of missing data, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems

    Science.gov (United States)

    Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)

    1994-01-01

    Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.

  16. Risk adjustment model of credit life insurance using a genetic algorithm

    Science.gov (United States)

    Saputra, A.; Sukono; Rusyaman, E.

    2018-03-01

    In managing the risk of credit life insurance, an insurance company should understand the character of the risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to designing the distribution model of claims over the insurance period, i.e., the collective risk model and the individual risk model. In the collective risk model, the claim that arises when a risk occurs is called an individual claim, and the accumulation of individual claims during a period of insurance is called the aggregate claim. The aggregate claim model may be formed from the claim size model and the number of individual claims. The questions addressed are how insurance risk can be measured with the premium model approach and whether this approach is appropriate for estimating potential future losses. In order to solve this problem, a genetic algorithm with roulette wheel selection is used.
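
    The collective risk model described above can be sketched as a compound Poisson simulation of the aggregate claim; the claim frequency, claim-size distribution, and premium loading below are illustrative assumptions, and the loading is simply fixed here rather than tuned by the genetic algorithm.

        # Collective risk model: aggregate claims as a compound Poisson sum (illustrative).
        import numpy as np

        rng = np.random.default_rng(3)
        n_policies, n_sims = 10_000, 5_000
        p_claim = 0.002                         # claim probability per policy and period
        mu, sigma = 9.0, 1.0                    # lognormal individual claim size parameters

        aggregate = np.empty(n_sims)
        for s in range(n_sims):
            n_claims = rng.poisson(n_policies * p_claim)        # number of individual claims
            aggregate[s] = rng.lognormal(mu, sigma, size=n_claims).sum()

        expected = aggregate.mean()
        tail_95 = np.percentile(aggregate, 95)                  # tail measure of risk
        print(f"expected aggregate claim = {expected:,.0f}")
        print(f"95th percentile          = {tail_95:,.0f}")
        print(f"premium with 20% loading = {1.2 * expected / n_policies:,.2f} per policy")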

  17. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    Science.gov (United States)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA) also known as Klobuchar model for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called NTCM-Klobpar model for single frequency GNSS applications. The model is driven by a parameter computed from GPS Klobuchar model and consecutively can be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each shows that the 24-hour prediction performance of the NTCM-Klobpar is better than the GPS Klobuchar model in global average. The root mean squared deviation of the 3D position errors are found to be about 0.24 and 0.45 m less for the NTCM-Klobpar compared to the GPS Klobuchar model during quiet and perturbed condition, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single frequency mass market devices with only little software modification.

  18. Effects of model schematisation, geometry and parameter values on urban flood modelling.

    Science.gov (United States)

    Vojinovic, Z; Seyoum, S D; Mwalwaka, J M; Price, R K

    2011-01-01

    One-dimensional (1D) hydrodynamic models have been used as a standard industry practice for urban flood modelling work for many years. More recently, however, model formulations have included a 1D representation of the main channels and a 2D representation of the floodplains. Since the physical process of describing exchanges of flows with the floodplains can be represented in different ways, the predictive capability of different modelling approaches can also vary. The present paper explores effects of some of the issues that concern urban flood modelling work. Impacts from applying different model schematisation, geometry and parameter values were investigated. The study has mainly focussed on exploring how different Digital Terrain Model (DTM) resolution, presence of different features on DTM such as roads and building structures and different friction coefficients affect the simulation results. Practical implications of these issues are analysed and illustrated in a case study from St Maarten, N.A. The results from this study aim to provide users of numerical models with information that can be used in the analyses of flooding processes in urban areas.

  19. Water quality modelling for ephemeral rivers: Model development and parameter assessment

    Science.gov (United States)

    Mannina, Giorgio; Viviani, Gaspare

    2010-11-01

    River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such water quality models require accurate model calibration in order to specify model parameters. Reliable model calibration requires an extensive array of water quality data that are generally rare and resource-intensive, both economically and in terms of human resources, to collect. In the case of small rivers, such data are scarce due to the fact that these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on the water quality modelling for small rivers, and such studies as have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns have been previously carried out in order to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data and results reveal important differences between the parameters used to model small rivers as compared to
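
    For orientation, the classic Streeter-Phelps oxygen-sag solution that the extended model builds on is sketched below; the rate constants and boundary values are illustrative, not the calibrated Oreto river parameters.

        # Classic Streeter-Phelps dissolved-oxygen sag (the core of the extended model).
        import numpy as np

        def do_deficit(t, L0, D0, kd, ka):
            """Oxygen deficit D(t) for first-order BOD decay kd and reaeration ka."""
            return ((kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t))
                    + D0 * np.exp(-ka * t))

        L0, D0 = 12.0, 1.0        # ultimate BOD and initial deficit (mg/L), illustrative
        kd, ka = 0.35, 0.80       # decay and reaeration rates (1/day), illustrative
        DO_sat = 9.0              # saturation DO (mg/L)

        t = np.linspace(0.0, 10.0, 101)          # travel time downstream (days)
        D = do_deficit(t, L0, D0, kd, ka)
        t_crit = np.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
        print(f"critical time = {t_crit:.2f} d, minimum DO = {DO_sat - D.max():.2f} mg/L")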

  20. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Global Likelihood and Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake) which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameters sets and reveals uncertainty intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty, only one or two out of five to six parameters in each model set displays a high uncertainty (e.g. pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm) that are more than twice as high for the latter. The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of

  1. Genetic parameters for weight gain and body measurements for Nile tilapias by random regression modeling

    Directory of Open Access Journals (Sweden)

    Ana Carolina Müller Conti

    2014-10-01

    Full Text Available The aim of the current study was to estimate the genetic parameters for weight gain and body measurements in the GIFT (Genetically Improved Farmed Tilapia) strain of Nile tilapia by random regression models. Several orders of Legendre polynomials were tested for random effects and modeled with 1, 3, 6 and 9 classes of residual variance. For the permanent environmental and family effects, third-order polynomials were adjusted for all traits, as well as for the genetic effects of weight, weight gain, length and width. For genetic effects of height and head, fourth-order polynomials were required. For weight gain, height and head, the best model was one that considered homogenous residual variance; however, for width and weight, heterogeneous variance with 3 and 9 age classes was required, respectively. The highest heritability for weight was 0.34 at 240–311 days, and for weight gain it was 0.69 at 311 days. For head and length, the highest heritability was around 270 days at 0.27 and 0.21, respectively. The highest heritability found for length was 0.20 at 254 days, 0.2 at 254 days for height, and for width the heritability was 0.54 at 311 days. Since the largest heritabilities were found for weight gain and width at 311 days, selection at these ages, based on these traits, would lead to greater genetic gains. Genetic correlations were higher between adjacent ages and, in general, selections at ages of less than 200 days did not lead to genetic gain correlated with traits at 300 days. The exception was for width, because high correlations were obtained between final and initial ages and the heritability was median in the majority of the period. Thus, selection based on the width at any age would lead to satisfactory genetic gain in this trait at the end of the growing season.

  2. An Adjusted Calculation Model of Reduced Heparin Doses in Cardiopulmonary Bypass Surgery in a Chinese Population.

    Science.gov (United States)

    Zhang, Yufeng; Liu, Kai; Li, Wei; Xue, Qian; Hong, Jiang; Xu, Jibin; Wu, Lihui; Ji, Guangyu; Sheng, Jihong; Wang, Zhinong

    2016-10-01

    To investigate the safety and efficacy of an adjusted regimen of heparin infusion in cardiopulmonary bypass (CPB) surgery in a Chinese population. Prospective, single-center, observational study. University teaching hospital. Patients having cardiac surgery with CPB were selected for this study using the following criteria: 18 to 75 years of age, undergoing first-time cardiac surgery with conventional median sternotomy, aortic clamping time between 40 and 120 minutes, and preoperative routine blood tests showing normal liver, renal, and coagulation functions. The exclusion criteria include salvage cases, a history of coagulopathy in the family, and long-term use of anticoagulation or antiplatelet drugs. Sixty patients were divided randomly into a control group (n = 30) receiving a traditional heparin regimen and an experimental group (n = 30) receiving an adjusted regimen. Activated coagulation time (ACT) was monitored at different time points, and ACT>480 seconds was set as the safety threshold of CPB. Heparin doses (initial dose, added dose, and total dose), protamine doses (initial dose, added dose, and total dose), CPB time, aortic clamping time, assisted circulation time, sternal closure time, blood transfusion volume, and drainage volume 24 hours after surgery were recorded. There was no significant difference in achieving target ACT after the initial dose of heparin between the 2 groups; CPB time, aortic clamping time, assisted circulation time, postoperative complication rate, and drainage volume between the 2 groups were not significantly different (p>0.05). However, initial and total dosage of heparin, initial and total dosage of protamine, sternal closure time, and intraoperative blood transfusion volume in the experimental group were significantly lower (p<0.05). The adjusted regimen of heparin infusion appears to be safe and effective in Chinese CPB patients, which might reduce the initial and total dosage of heparin and protamine as well as sternal closure time and intraoperative blood transfusion volume. Copyright © 2016 Elsevier Inc

  3. The use of a vertical patternator as a parameter for adjustment of air-assisted sprayers

    Directory of Open Access Journals (Sweden)

    Renildo L. Mion

    2011-04-01

    Full Text Available Excessive consumption of pesticides in Brazilian agriculture is a concern, and one of the factors contributing to this excess is the incorrect use of application equipment, causing severe environmental contamination. An agricultural application is successful only when the target is reached with the least possible environmental contamination. The aim of this study was to compare the vertical distribution profile of an air-assisted sprayer with and without air flow and with different numbers of nozzles on the manifolds, using a vertical patternator as the evaluation parameter. The set used was a VALTRA tractor, model BM-120 4x2 TDA, and a Jacto air-assisted sprayer, model ARBUS 400 GOLDEN, with J5-2 nozzles, a pressure of 1378 kPa and an air velocity of 35 m s-1. The number of nozzles did not influence the volumetric distribution profile. The air flow influenced the volumetric distribution profile on both the right and the left side. The largest volumes occurred below 1.16 cm with the air-assisted sprayer with or without air flow, using 12 or 6 nozzles.

  4. Assessing models for parameters of the Ångström-Prescott formula in China

    DEFF Research Database (Denmark)

    Liu, Xiaoying; Xu, Yinlong; Zhong, Xiuli

    2012-01-01

    Application of the Ångström–Prescott (A–P) model, one of the best rated global solar irradiation (Rs) models based on sunshine, is often limited by the lack of model parameters. Increasing the availability of its parameters in the absence of Rs measurement provides an effective way to overcome...... this problem. Although some models relating the A–P parameters to other variables have been developed, they generally lack worldwide validity test. Using data from 80 sites covering three agro-climatic zones in China, we evaluated seven models that relate the parameters to annual average of relative sunshine...... in zone I in predicting Rs, indicating larger errors in humid climates. Since most productive agricultural areas in China are located in zone I, developing parameter models tailored to this zone would be valuable to improve Rs accuracy....
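
    For reference, the A–P formula itself is Rs = Ra(a + b·n/N); the sketch below evaluates it with the commonly used fallback coefficients (a = 0.25, b = 0.50, as recommended in FAO-56 when no local calibration is available) and illustrative daily inputs.

        # Angstrom-Prescott estimate of global solar irradiation from sunshine duration.
        import numpy as np

        def angstrom_prescott(Ra, n, N, a=0.25, b=0.50):
            """Rs = Ra * (a + b * n/N); a and b are the A-P parameters discussed above.
            The defaults are common fallback values (FAO-56), not site-calibrated ones."""
            return Ra * (a + b * np.asarray(n) / np.asarray(N))

        # Illustrative daily values: extraterrestrial radiation Ra (MJ m-2 d-1),
        # measured sunshine hours n and maximum possible sunshine hours N.
        Ra = np.array([25.1, 30.4, 38.2])
        n = np.array([4.0, 7.5, 11.2])
        N = np.array([10.1, 12.0, 13.8])
        print(angstrom_prescott(Ra, n, N))   # estimated Rs in MJ m-2 d-1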

  5. Quantitative assessments of mantle flow models against seismic observations: Influence of uncertainties in mineralogical parameters

    Science.gov (United States)

    Schuberth, Bernhard S. A.

    2017-04-01

    synthetic traveltime data can then be compared - on statistical grounds - to the traveltime variations observed on Earth. Here, we now investigate the influence of uncertainties in the various input parameters that enter our modelling. This is especially important for the material properties at high pressure and high temperature entering the mineralogical models. In particular, this concerns uncertainties that arise from relating measurements in the laboratory to Earth properties on a global scale. As one example, we will address the question on the influence of anelasticity on the variance of global synthetic traveltime residuals. Owing to the differences in seismic frequency content between laboratory measurements (MHz to GHz) and the Earth (mHz to Hz), the seismic velocities given in the mineralogical models need to be adjusted; that is, corrected for dispersion due to anelastic effects. This correction will increase the sensitivity of the seismic velocities to temperature variations. The magnitude of this increase depends on absolute temperature, frequency, the frequency dependence of attenuation and the activation enthalpy of the dissipative process. Especially the latter two are poorly known for mantle minerals and our results indicate that variations in activation enthalpy potentially produce the largest differences in temperature sensitivity with respect to the purely elastic case. We will present new wave propagation simulations and corresponding statistical analyses of traveltime measurements for different synthetic seismic models spanning the possible range of anelastic velocity conversions (while being based on the same mantle circulation model).

  6. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as system identification method. It is concluded that both the sampling interval and number of sampled points may play a significant role with respect to the statistical errors. Furthermore...

  7. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...

  8. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  9. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    In order to describe and predict the growth and expression of recombinant proteins by using a genetically modified Pichia pastoris, we developed a number of unstructured models based on growth kinetic equation, fed-batch mass balance and the assumptions of constant cell and protein yields. The growth of P. pastoris on ...

  10. Continuum model for masonry: Parameter estimation and validation

    NARCIS (Netherlands)

    Lourenço, P.B.; Rots, J.G.; Blaauwendraad, J.

    1998-01-01

    A novel yield criterion that includes different strengths along each material axis is presented. The criterion includes two different fracture energies in tension and two different fracture energies in compression. The ability of the model to represent the inelastic behavior of orthotropic materials

  11. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Vicksburg, MS: US Army Engineer Research and Development Center. An electronic copy of this CHETN is available from http://chl.erdc.usace.army.mil/chetn...Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area ( LCA ) Comprehensive

  12. Coupled modelling of underground structures using internal parameters

    Czech Academy of Sciences Publication Activity Database

    Procházka, P.; Trčková, Jiřina; Kuklík, P.; Kalousková, M.

    2004-01-01

    Roč. 1, č. 13 (2004), s. 23-30 ISSN 1214-9691 R&D Projects: GA AV ČR IAA2119001 Institutional research plan: CEZ:AV0Z3046908 Keywords : physical and numerical modelling * stability * tunnel Subject RIV: JM - Building Engineering

  13. Dynamics of 'abc' and 'qd' constant parameters induction generator model

    DEFF Research Database (Denmark)

    Fajardo-R, L.A.; Medina, A.; Iov, F.

    2009-01-01

    In this paper, parametric sensitivity effects on the dynamics of the induction generator in the presence of local perturbations are investigated. The study is conducted in a 3x2 MW wind park dealing with the abc, qd0 and qd reduced-order induction generator models, respectively, and with fluxes as state...

  14. Winkler's single-parameter subgrade model from the perspective of ...

    African Journals Online (AJOL)

    ... tensor are taken into consideration, whereas the shear stresses are intentionally dropped in order to provide a useful perspective from which Winkler's model and its associated coefficient of subgrade reaction can be viewed. The formulation takes into account the variation of the elasticity modulus with depth.
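
    For context, the single-parameter idealization referred to in the abstract relates the contact pressure directly to the settlement through the coefficient of subgrade reaction, which for a beam of width b and flexural rigidity EI resting on the subgrade gives the familiar beam-on-elastic-foundation equation (standard textbook form, not reproduced from the paper):

      p(x) = k_s \, w(x), \qquad
      EI \, \frac{d^4 w}{dx^4} + k_s \, b \, w(x) = q(x)

    Here p is the contact pressure, w the deflection, k_s the coefficient of subgrade reaction, and q the applied load per unit length.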

  15. An approach to measure parameter sensitivity in watershed hydrological modelling

    Science.gov (United States)

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  16. Anatomical parameters for musculoskeletal modeling of the hand and wrist

    NARCIS (Netherlands)

    Mirakhorlo, M. (Mojtaba); Visser, Judith M A; Goislard de Monsabert, B. A A X; van der Helm, F.C.T.; Maas, H.; Veeger, H. E J

    2016-01-01

    A musculoskeletal model of the hand and wrist can provide valuable biomechanical and neurophysiological insights, relevant for clinicians and ergonomists. Currently, no consistent data-set exists comprising the full anatomy of these upper extremity parts. The aim of this study was to collect a

  17. Numerical Modelling of Rubber Vibration Isolators: identification of material parameters

    NARCIS (Netherlands)

    Beijers, C.A.J.; Noordman, Bram; de Boer, Andries; Ivanov, N.I.; Crocker, M.J.

    2004-01-01

    Rubber vibration isolators are used for vibration isolation of engines at high frequencies. To make a good prediction regarding the characteristics of a vibration isolator in the design process, numerical models can be used. However, for a reliable prediction of the dynamic behavior of the isolator,

  18. Ecohydrological model parameter selection for stream health evaluation.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in the development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic index (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and the Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), the k-means clustering technique (two clusters: C1 and C2), and all streams together (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of the best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. The final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with validation R2 ranging from 0.67 to 0.92 for EPT taxa, 0.49 to 0.85 for FIBI, 0.56 to 0.75 for HBI, and 0.99 for fish IBI across all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our
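
    A sketch of two of the steps described above (Spearman-based variable screening and k-means grouping of streams) on synthetic data; the actual HIT/SWAT variables, selection thresholds and the ANFIS modeling step are not reproduced here:

      # Hypothetical sketch: rank candidate predictors by Spearman correlation with a
      # stream-health metric, then group streams into two clusters with k-means.
      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_streams, n_vars = 60, 8
      X = rng.normal(size=(n_streams, n_vars))       # stand-ins for HIT/SWAT variables
      ibi = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n_streams)

      # variable screening: keep predictors whose |Spearman rho| exceeds a chosen cutoff
      rho = [abs(spearmanr(X[:, j], ibi)[0]) for j in range(n_vars)]
      selected = [j for j, r in enumerate(rho) if r > 0.3]
      print("selected variable indices:", selected)

      # stream grouping: two k-means clusters (C1 and C2), as in the study
      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print("streams per cluster:", np.bincount(clusters))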

  19. Sensitivity analysis of specific activity model parameters for environmental transport of 3H and dose assessment

    International Nuclear Information System (INIS)

    Rout, S.; Mishra, D.G.; Ravi, P.M.; Tripathi, R.M.

    2016-01-01

    Tritium is one of the radionuclides likely to be released to the environment from Pressurized Heavy Water Reactors. Environmental models are extensively used to quantify the complex environmental transport processes of radionuclides and to assess the impact on the environment. Model parameters exerting a significant influence on model results are identified through a sensitivity analysis (SA). SA is the study of how the variation (uncertainty) in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input parameters. This study was designed to identify the sensitive parameters of the specific activity model (TRS 1616, IAEA) for environmental transfer of 3H following release to air and then to vegetation and animal products. The model includes parameters such as the air-to-soil transfer factor (CRs), the tissue free water 3H to organically bound 3H ratio (Rp), relative humidity (RH), WCP (fractional water content) and WEQp (water equivalent factor); any change in these parameters leads to a change in the 3H level in vegetation and animal products and consequently in the ingestion dose. All these parameters are functions of climate and/or plant type, which change with time, space and species. Estimating these parameters each time is time consuming and requires sophisticated instrumentation. It is therefore necessary to identify the sensitive parameters and fix the least sensitive ones at constant values, so that the 3H dose can be estimated accurately and quickly in routine assessments.
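
    The TRS 1616 equations themselves are not quoted in the record, so the sketch below only illustrates the screening idea: perturb each listed parameter of a placeholder dose function one at a time and rank the resulting change. The dose function and parameter values are hypothetical.

      # Hypothetical one-at-a-time screening: rank the listed parameters (CRs, Rp, RH,
      # WCP, WEQp) by how strongly a placeholder ingestion-dose function responds to a
      # +/-10 % perturbation.  The dose function is illustrative, not the TRS 1616 model.
      def dose(CRs, Rp, RH, WCP, WEQp):
          return CRs * RH * WCP + CRs * Rp * WEQp * (1.0 - WCP)

      base = {"CRs": 0.5, "Rp": 0.7, "RH": 0.7, "WCP": 0.8, "WEQp": 0.6}
      delta = 0.10
      ranking = []
      for name, value in base.items():
          hi = dict(base, **{name: value * (1 + delta)})
          lo = dict(base, **{name: value * (1 - delta)})
          ranking.append((abs(dose(**hi) - dose(**lo)) / (2 * delta * dose(**base)), name))

      for s, name in sorted(ranking, reverse=True):
          print(f"{name}: relative sensitivity {s:.2f}")

    Parameters with small indices in such a screening are candidates for being fixed at constant values, as suggested in the abstract.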

  20. Short description of the BIOS-model, and selection of biosphere parameters to be used in radionuclide transport and dose

    International Nuclear Information System (INIS)

    Jong, E.J. de; Koester, H.W.; Vries, W.J. de.

    1990-02-01

    In the framework of the PACOMA project (Performance Assessment of Confinements for Medium and Alpha Waste), initiated by the European Commission, possible future radiation doses due to contamination of the biosphere by radionuclides originating from radioactive waste disposed of in salt formations were calculated. In all cases considered, radionuclides coming out of the geosphere enter a river. For the biosphere calculations the BIOS-model, developed by the NRPB in England, is used. A short description is given of the model, as well as of the adjustments made at the RIVM to calculate the total individual and collective doses and the subdoses of the different exposure pathways. The values of the biosphere parameters selected for the model are presented, together with the literature consulted. (author). 17 refs., 3 figs.; 2 tabs