WorldWideScience

Sample records for models estimated relative

  1. Maneuver Estimation Model for Relative Orbit Determination

    National Research Council Canada - National Science Library

    Storch, Tara R

    2005-01-01

    While the use of relative orbit determination has reduced the difficulties inherent in tracking geosynchronous satellites that are in close proximity, the problem is often compounded by stationkeeping...

  2. Parameterizing Dose-Response Models to Estimate Relative Potency Functions Directly

    Science.gov (United States)

    Dinse, Gregg E.

    2012-01-01

    Many comparative analyses of toxicity assume that the potency of a test chemical relative to a reference chemical is constant, but employing such a restrictive assumption uncritically may generate misleading conclusions. Recent efforts to characterize non-constant relative potency rely on relative potency functions and estimate them secondarily after fitting dose-response models for the test and reference chemicals. We study an alternative approach of specifying a relative potency model a priori and estimating it directly using the dose-response data from both chemicals. We consider a power function in dose as a relative potency model and find that it keeps the two chemicals’ dose-response functions within the same family of models for families typically used in toxicology. When differences in the response limits for the test and reference chemicals are attributable to the chemicals themselves, the older two-stage approach is the more convenient. When differences in response limits are attributable to other features of the experimental protocol or when response limits do not differ, the direct approach is straightforward to apply with nonlinear regression methods and simplifies calculation of simultaneous confidence bands. We illustrate the proposed approach using Hill models with dose-response data from U.S. National Toxicology Program bioassays. Though not universally applicable, this method of estimating relative potency functions directly can be profitably applied to a broad family of dose-response models commonly used in toxicology. PMID:22700543
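    As a concrete sketch of the "direct" approach described above, the following jointly fits a reference Hill dose-response curve and a power-function relative potency rho(d) = c*d**delta by nonlinear regression. All function names, doses, and parameter values are illustrative assumptions, not the paper's data or code.

```python
# A minimal sketch of direct relative potency estimation: one reference Hill
# curve plus a power-function relative potency fitted to both chemicals at once.
import numpy as np
from scipy.optimize import curve_fit

def hill(d, e0, emax, ec50, h):
    """Four-parameter Hill dose-response function."""
    return e0 + emax * d**h / (ec50**h + d**h)

def joint_model(X, e0, emax, ec50, h, c, delta):
    """Stacked (dose, chemical-indicator) pairs. Reference chemical: Hill(d).
    Test chemical: Hill(rho(d) * d), where rho(d) = c * d**delta."""
    d, is_test = X
    d_equiv = np.where(is_test == 1, c * d**delta * d, d)
    return hill(d_equiv, e0, emax, ec50, h)

# Illustrative data: nonzero doses for reference (0) and test (1) chemicals.
rng = np.random.default_rng(0)
dose = np.tile(np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0]), 2)
chem = np.repeat([0, 1], 6)
resp = joint_model((dose, chem), 1.0, 10.0, 4.0, 1.5, 0.8, 0.1) \
       + rng.normal(0, 0.3, dose.size)

popt, pcov = curve_fit(joint_model, (dose, chem), resp, p0=[1, 10, 4, 1, 1, 0])
print("c, delta of the relative potency function:", popt[4:])
```

    Because both chemicals share one set of Hill parameters, the fit directly returns the relative potency function, which is what simplifies simultaneous confidence bands.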

  3. Adequacy of relative and absolute risk models for lifetime risk estimate of radiation-induced cancer

    International Nuclear Information System (INIS)

    McBride, M.; Coldman, A.J.

    1988-03-01

    This report examines the applicability of the relative (multiplicative) and absolute (additive) models in predicting lifetime risk of radiation-induced cancer. A review of the epidemiologic literature, and a discussion of the mathematical models of carcinogenesis and their relationship to these models of lifetime risk, are included. Based on the available data, the relative risk model for the estimation of lifetime risk is preferred for non-sex-specific epithelial tumours. However, because of lack of knowledge concerning other determinants of radiation risk and of background incidence rates, considerable uncertainty in modelling lifetime risk still exists. Therefore, it is essential that follow-up of exposed cohorts be continued so that population-based estimates of lifetime risk are available
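    For readers who want the mechanics, here is a minimal numerical sketch contrasting additive (absolute) and multiplicative (relative) lifetime-risk projections over a crude life table. The baseline hazard, dose, and risk coefficients are illustrative assumptions, not the report's values.

```python
# A minimal sketch: lifetime excess cancer risk under relative (multiplicative)
# vs. absolute (additive) projection models for a single exposure at age 20.
import numpy as np

ages = np.arange(20, 76)
baseline = 1e-4 * np.exp(0.08 * (ages - 20))   # hypothetical background hazard
survival = np.cumprod(1 - baseline)            # crude survivorship from age 20

dose = 0.1            # Sv, received at age 20 (hypothetical)
err_per_sv = 0.5      # excess relative risk per Sv (hypothetical)
ear_per_sv = 2e-4     # excess absolute risk per Sv per year (hypothetical)

# Relative model scales the age-specific baseline; absolute model adds a constant.
excess_rel = np.sum(survival * baseline * err_per_sv * dose)
excess_abs = np.sum(survival * ear_per_sv * dose)
print(f"lifetime excess risk: relative={excess_rel:.4f}, absolute={excess_abs:.4f}")
```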

  4. Estimating the relative position of risperidone primary binding site in Sera Albumins. Modeling from spectrofluorimetric data

    Science.gov (United States)

    Cortez, Celia Martins; Fragoso, Viviane Muniz S.; Silva, Dilson

    2014-10-01

    In this work, we used a mathematical model, based on fluorescence quenching theory, to study the interaction of risperidone with human and bovine serum albumins and to estimate the relative position of its primary binding site. The results show that the primary binding site for risperidone in HSA and BSA lies very close to the position of tryptophan 134 in BSA, possibly in domain 1B.

  5. A model to estimate the relative position of sites for ligands in serum albumins

    Science.gov (United States)

    Motta, Art Adriel Emidio de Araújo; Grassini, Maria Carolina Vilela; Cortez, Célia Martins; Silva, Dilson

    2017-11-01

    In this work, we present a mathematical-computational model developed to estimate the relative position of ligand binding sites in HSA and BSA, based on the theory of fluorescence quenching, considering the molecular and spectrofluorimetric differences and similarities between these two albumins. Albumin is the largest and the most abundant serum protein in vertebrates. The ability to bind xenobiotics makes albumin important to the bioavailability and effectiveness of drugs.
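    Both records build on fluorescence quenching theory; the classical first step of such an analysis is a Stern-Volmer fit, sketched below on hypothetical titration data (the concentrations and ratios are illustrative assumptions, not the papers' measurements).

```python
# A minimal sketch of a Stern-Volmer analysis of fluorescence quenching:
# F0/F = 1 + Ksv * [Q], so the slope against ligand concentration gives Ksv.
import numpy as np

ligand = np.array([0.0, 2.0, 4.0, 6.0, 8.0]) * 1e-6      # [quencher], M
f0_over_f = np.array([1.00, 1.18, 1.37, 1.55, 1.74])     # quenching ratios

ksv = np.polyfit(ligand, f0_over_f - 1.0, 1)[0]          # slope = Ksv
print(f"Ksv ~ {ksv:.2e} M^-1")
```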

  6. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  7. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  8. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    Science.gov (United States)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both sets of results are displayed and compared using maps; the Poisson-Gamma model yields a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods to overcome the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
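    A minimal sketch of the two estimators compared above: the raw SMR versus the Poisson-Gamma smoothed relative risk, using the conjugacy of the Gamma prior with the Poisson likelihood. The counts and the prior are illustrative assumptions, not the Malaysian data.

```python
# SMR vs. Poisson-Gamma smoothed relative risk for area-level counts.
import numpy as np

observed = np.array([0, 3, 12, 1, 7])        # cases per district (hypothetical)
expected = np.array([2.0, 2.5, 9.0, 4.0, 6.5])

smr = observed / expected                    # classical SMR; unstable when E is small

# Conjugate model: O_i ~ Poisson(E_i * theta_i), theta_i ~ Gamma(a, b),
# so theta_i | O_i ~ Gamma(a + O_i, b + E_i) with posterior mean below.
a, b = 2.0, 2.0                              # prior mean a/b = 1 (no excess risk)
rr_smoothed = (a + observed) / (b + expected)

for o, e, s, r in zip(observed, expected, smr, rr_smoothed):
    print(f"O={o:2d} E={e:4.1f}  SMR={s:4.2f}  smoothed RR={r:4.2f}")
```

    The smoothing is visible for the district with O=0: the SMR is exactly 0, while the posterior mean is pulled toward the prior mean, which is what produces the smoother map.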

  9. Estimating internal exposure risks by the relative risk and the National Institute of Health risk models

    International Nuclear Information System (INIS)

    Mehta, S.K.; Sarangapani, R.

    1995-01-01

    This paper presents tabulations of risk (R) and person-years of life lost (PYLL) for acute exposures of individual organs at ages 20 and 40 years for the Indian and Japanese populations, to illustrate the effect of age at exposure in the two models. Results are also presented for the organ-wise nominal probability coefficients (NPC) and PYLL for individual organs for the age-distributed Indian population by the two models. The results show that for all organs the estimates of PYLL and NPC for the Indian population are lower than those for the Japanese population by both models, except for oesophagus, breast and ovary by the relative risk (RR) model, where the opposite trend is observed. The results also show that the Indian all-cancer NPC value averaged over the two models is 2.9 × 10⁻² Sv⁻¹, significantly lower than the world average value of 5 × 10⁻² Sv⁻¹ estimated by the ICRP. (author). 9 refs., 2 figs., 2 tabs

  10. Estimation of genetic parameters related to eggshell strength using random regression models.

    Science.gov (United States)

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
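    A minimal sketch of the covariate-building step of such a random regression model: Legendre polynomials of the standardized test week, second-order for the direct genetic effect and third-order for the permanent environment, as described above. The week range is an illustrative assumption.

```python
# Build Legendre-polynomial design matrices over a laying trajectory.
import numpy as np
from numpy.polynomial import legendre

week = np.arange(20, 73)                                      # hypothetical test weeks
t = 2 * (week - week.min()) / (week.max() - week.min()) - 1   # rescale to [-1, 1]

def legendre_design(t, order):
    """Columns are Legendre polynomials P_0..P_order evaluated at t."""
    return np.column_stack([legendre.legval(t, np.eye(order + 1)[k])
                            for k in range(order + 1)])

Z_genetic = legendre_design(t, 2)     # second order for direct genetic effects
Z_pe      = legendre_design(t, 3)     # third order for permanent environment
print(Z_genetic.shape, Z_pe.shape)    # (53, 3) (53, 4)
```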

  11. The switch from relative to absolute phase centre variation model and its impact on coordinate estimates within local engineering networks

    Science.gov (United States)

    Shi, Junbo; Guo, Jiming

    2008-12-01

    The IGS announced the switch from a relative to an absolute phase centre variation model on 5 October 2006, after detailed discussions concerning how the model switch would benefit global and regional networks, as well as the IGS products. However, there was no dedicated study on the major concern of this paper - the influence of the model switch on local engineering networks, especially on coordinate estimates, which are key factors in engineering construction. The data set considered in this paper is a bridge control network with baselines ranging in length from 200 to 7000 metres, utilising different GPS antenna types. In addition, high correlations between coordinate estimates, antenna phase centre variations and troposphere parameters are also considered. The results demonstrate that the antenna model switch does not produce significant differences in coordinate estimates within local engineering networks.

  12. Variation in estimated ozone-related health impacts of climate change due to modeling choices and assumptions.

    Science.gov (United States)

    Post, Ellen S; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Huang, Jin; Leung, Lai-Yung; Nolte, Christopher G; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee

    2012-11-01

    Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration-response functions. Using the U.S. Environmental Protection Agency's (EPA's) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O(3))-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration-response functions were chosen to match those used in the U.S. EPA's 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O(3). Different combinations of methodological choices produced a range of estimates of national O(3)-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of climate change model and air quality model was the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O(3)-related human health effects resulting from climate change.
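    A minimal sketch of the log-linear health impact function that BenMAP-style calculations apply to each modeled ozone change; the coefficient, baseline rate, and population are illustrative assumptions, not values from the study.

```python
# Attributable deaths under a log-linear concentration-response function.
import math

def attributable_deaths(beta, delta_c, baseline_rate, population):
    """Deaths attributable to an ozone change delta_c (ppb); the sign of the
    result follows the sign of delta_c (avoided deaths come out negative)."""
    return baseline_rate * (1 - math.exp(-beta * delta_c)) * population

beta = 0.00052        # per-ppb mortality coefficient (hypothetical)
delta_c = 2.0         # climate-driven change in seasonal ozone, ppb (hypothetical)
deaths = attributable_deaths(beta, delta_c, baseline_rate=0.0085,
                             population=3.0e8)
print(f"{deaths:,.0f} attributable deaths")
```

    Re-running this with each (climate model, air quality model, population projection, beta) combination is what generates the spread of estimates the abstract reports.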

  13. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
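    A minimal sketch of the two competing estimators on simulated data: a Poisson GLM with a sandwich (robust) covariance versus a binomial GLM with a log link. The simulation setup is an illustrative assumption, not the paper's design.

```python
# Robust (modified) Poisson vs. log-binomial regression for a risk ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
risk = np.clip(np.exp(-1.5 + 0.3 * x), 0, 1)   # true log-linear risk model
y = rng.binomial(1, risk)
X = sm.add_constant(x)

# Robust Poisson: Poisson family with sandwich (HC0) standard errors.
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Log-binomial: Binomial family with log link (may fail to converge
# when fitted risks approach 1).
logbin_fit = sm.GLM(y, X,
                    family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print("RR per unit x, robust Poisson:", np.exp(poisson_fit.params[1]))
print("RR per unit x, log-binomial: ", np.exp(logbin_fit.params[1]))
```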

  14. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    Science.gov (United States)

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement of medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study estimates the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram. A discrete wavelet transform decomposes the electroencephalogram signals into four levels, and the relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of different electroencephalogram frequency bands associated with the anaesthetic phases awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth of anaesthesia index is generated by an adaptive neuro-fuzzy inference system based on a fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
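    A minimal sketch of the relative wave energy (RWE) feature extraction described above, using a 4-level discrete wavelet decomposition; the wavelet choice (db4) and the stand-in signal are illustrative assumptions.

```python
# Relative wave energy per sub-band from a 4-level DWT of an EEG epoch.
import numpy as np
import pywt

rng = np.random.default_rng(2)
eeg = rng.normal(size=1024)                 # stand-in for one EEG epoch

coeffs = pywt.wavedec(eeg, "db4", level=4)  # [A4, D4, D3, D2, D1]
energies = np.array([np.sum(c**2) for c in coeffs])
rwe = energies / energies.sum()             # relative energy per sub-band

for name, e in zip(["A4", "D4", "D3", "D2", "D1"], rwe):
    print(f"{name}: {e:.3f}")
```

    Each RWE value is the fraction of total signal energy carried by one frequency band, which is what makes the feature comparable across epochs and patients.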

  15. A new modeling approach estimates the relative importance of different community assembly processes

    NARCIS (Netherlands)

    van der Plas, Fons; Janzen, Thijs; Ordonez, Alejandro; Fokkema, Wimke; Reinders, Josephine; Etienne, Rampal S.; Olff, Han

    The relative importance of niche-based (e.g., competitive or stress-based) and stochastic (e.g., random dispersal) processes in structuring ecological communities is frequently analyzed by studying trait distributions of co-occurring species. While filtering processes, such as the exclusion of

  16. Evaluation of alternative age-based methods for estimating relative abundance from survey data in relation to assessment models

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper

    2014-01-01

    Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent...... observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model...... the different indices produced. The stratified mean method is found much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact...

  17. Towards a Model Climatology of Relative Humidity in the Upper Troposphere for Estimation of Contrail and Contrail-Induced Cirrus

    Science.gov (United States)

    Selkirk, Henry B.; Manyin, M.; Ott, L.; Oman, L.; Benson, C.; Pawson, S.; Douglass, A. R.; Stolarski, R. S.

    2011-01-01

    The formation of contrails and contrail cirrus is very sensitive to the relative humidity of the upper troposphere. To reduce uncertainty in an estimate of the radiative impact of aviation-induced cirrus, a model must therefore be able to reproduce the observed background moisture fields with reasonable and quantifiable fidelity. Here we present an upper tropospheric moisture climatology from a 26-year ensemble of simulations using the GEOS CCM. We compare this free-running model's moisture fields to those obtained from the MLS and AIRS satellite instruments, our most comprehensive observational databases for upper tropospheric water vapor. Published comparisons have shown a substantial wet bias in GEOS-5 assimilated fields with respect to MLS water vapor and ice water content. This tendency is clear as well in the GEOS CCM simulations. The GEOS-5 moist physics in the GEOS CCM uses a saturation adjustment that prevents supersaturation, which is unrealistic when compared to in situ moisture observations from MOZAIC aircraft and balloon sondes as we will show. Further, the large-scale satellite datasets also consistently underestimate super-saturation when compared to the in-situ observations. We place these results in the context of estimates of contrail and contrail cirrus frequency.

  18. Spatial analysis of drug-related hospital admissions: an auto-Gaussian model to estimate the hospitalization rates in Italy

    Directory of Open Access Journals (Sweden)

    Emanuela Colasante

    2008-12-01

    Introduction: The aim of this study is to evaluate, even if partially, how much the drug use phenomenon impacts the Italian National Health System, through the estimation at local level (Local Health Unit) of the hospitalization rate caused by use and abuse of substances such as opiates, barbiturates-sedatives-hypnotics, cocaine and cannabis, keeping in mind the spatial distribution of the phenomenon, i.e. the fact that what happens in a specific area depends on what is happening in the neighbouring areas (spatial autocorrelation).

    Methods: Data from the hospital discharge database were provided by the Ministry of Health and an auto-Gaussian model was fitted. The spatial trend can be a function of other explanatory variables or can simply be modeled as a function of spatial location. Both models were fitted and compared, using the number of subjects kept in charge by Drug Addiction Services and the number of hospital beds as covariates.

    Results: Concerning opiate-related hospitalizations, results show areas where the phenomenon was less prominent in 2001 (Lombardy, part of Liguria, Umbria, part of Latium, Campania, Apulia and Sicily). In the following years, the hospitalization rates increased in some areas, such as the north of Apulia and parts of Campania and Latium. A dependence of the opiate-related hospitalization rates on the rate of subjects kept in charge by the Drug Addiction Services is highlighted. Concerning barbiturates-sedatives-hypnotics consumption, the best model is the one without covariates, and estimated hospitalization rates are lower than 3 per thousand. The model with only the covariate "rate of subjects kept in charge by Drug Addiction Services" has been used for both cocaine and cannabis. In these two cases, more than half of the Local Health Units report hospitalization rates lower than 0.5 per thousand

  19. Quantifying the Model-Related Variability of Biomass Stock and Change Estimates in the Norwegian National Forest Inventory

    Science.gov (United States)

    Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup

    2014-01-01

    National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...

  20. MODEL-ASSISTED ESTIMATION OF THE GENETIC VARIABILITY IN PHYSIOLOGICAL PARAMETERS RELATED TO TOMATO FRUIT GROWTH UNDER CONTRASTED WATER CONDITIONS

    Directory of Open Access Journals (Sweden)

    Dario Constantinescu

    2016-12-01

    Drought stress is a major abiotic stress threatening plant and crop productivity. In the case of fleshy fruits, understanding the mechanisms governing water and carbon accumulation and identifying genes, QTLs and phenotypes that will enable trade-offs between fruit growth and quality under Water Deficit (WD) conditions is a crucial challenge for breeders and growers. In the present work, 117 recombinant inbred lines of a population of Solanum lycopersicum were phenotyped under control and WD conditions. Plant water status, fruit growth and composition were measured, and the data were used to calibrate a process-based model describing water and carbon fluxes in a growing fruit as a function of plant and environment. Eight genotype-dependent model parameters were estimated using a multiobjective evolutionary algorithm in order to minimize the prediction errors of fruit dry and fresh mass throughout fruit development. WD increased the fruit dry matter content (up to 85%) and decreased its fresh weight (up to 60%), with big-fruit genotypes being the most sensitive. The mean normalized root mean squared errors of the predictions ranged between 16 and 18% in the population. Variability in the genotypic model parameters allowed us to explore diverse genetic strategies in response to WD. An interesting group of genotypes could be discriminated, in which (i) the low loss of fresh mass under WD was associated with high active uptake of sugars and a low value of the maximum cell wall extensibility, and (ii) the high dry matter content in the control treatment (C) was associated with a slow decrease of mass flow. Using 501 SNP markers genotyped across the genome, a QTL analysis of the model parameters detected three main QTLs related to xylem and phloem conductivities, on chromosomes 2, 4 and 8. The model was then applied to design ideotypes with high dry matter

  1. Estimating uncertainty and its temporal variation related to global climate models in quantifying climate change impacts on hydrology

    Science.gov (United States)

    Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua

    2018-01-01

    Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to adequately represent the uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 with a 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including means and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while trends were analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and the hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change in mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value across models, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance

  2. Back-extrapolating a land use regression model for estimating past exposures to traffic-related air pollution.

    Science.gov (United States)

    Levy, Ilan; Levin, Noam; Yuval; Schwartz, Joel D; Kark, Jeremy D

    2015-03-17

    Land use regression (LUR) models rely on air pollutant measurements for their development, and are therefore limited to recent periods when such measurements are available. Here we propose an approach to overcome this gap and calculate LUR models several decades before measurements were available. We first developed a LUR model for NOx using annual averages of NOx at all available air quality monitoring sites in Israel between 1991 and 2011, with time as one of the independent variables. We then reconstructed historical spatial data (e.g., the road network) from historical topographic maps to apply the model's prediction to each year from 1961 to 2011. The model's predictions were then validated against independent estimates of national annual NOx emissions from on-road vehicles in a top-down approach. The model's cross-validated R2 was 0.74, and the correlation between the model's annual averages and the national annual NOx emissions between 1965 and 2011 was 0.75. The road network and population are persistent predictors in many LUR models. The use of available historical data on these predictors to resolve the spatial variability of air pollutants, together with complementary national estimates of the change in pollution levels over time, enables historical reconstruction of exposures.
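    A minimal sketch of the back-extrapolation idea: fit a LUR with calendar year as one predictor, then apply the fitted surface with reconstructed historical spatial data to a pre-monitoring year. Predictors, coefficients, and data below are illustrative assumptions, not the Israeli data set.

```python
# LUR with a time covariate, back-extrapolated to a pre-monitoring year.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
year = rng.integers(1991, 2012, n)          # years with NOx measurements
road_density = rng.uniform(0, 1, n)         # reconstructed spatial predictor
pop_density = rng.uniform(0, 1, n)

nox = 20 + 15 * road_density + 8 * pop_density - 0.4 * (year - 1991) \
      + rng.normal(0, 2, n)

X = np.column_stack([road_density, pop_density, year])
lur = LinearRegression().fit(X, nox)

# Back-extrapolate: same sites, historical spatial data, year set to 1961.
X_1961 = np.column_stack([road_density, pop_density, np.full(n, 1961)])
print("mean predicted NOx in 1961:", round(lur.predict(X_1961).mean(), 1))
```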

  3. Model linear absolute and relative risk estimates for cancer induced by ionizing radiation in Mexican cohort of occupationally exposed

    International Nuclear Information System (INIS)

    Alvarez, R.J.T.; Trovar, M.V.M; González, J.F.

    2015-01-01

    Starting from the natural cancer mortality rate m(t) per 100,000 inhabitants, modeled by a fourth-degree polynomial function of age from Mexican population data (2008), and assuming: a) a 1:5 ratio of radiation-induced cancers to spontaneously occurring cancers, b) an initial cohort size N0 = 100,000 occupationally exposed personnel, c) a dose rate of HE = (2 ± 1) mSv per year received by the personnel from 18 to 65 years of age, d) a latency of 8 years for cancer induction after irradiation, e) cohort follow-up to age 75, and f) the absolute and relative risk coefficients of the BEIR II and BEIR VII cancer induction models (excluding leukemia); the study determined: under BEIR II, totals of 125 and 400 cancer deaths for the linear absolute and relative models, respectively; under BEIR VII, 345 and 927 fatal cancer cases for the linear absolute and relative models, respectively.

  4. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    Science.gov (United States)

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution(+) (GC(+)) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic) have been modeled and analyzed. The application

  5. Estimation of environment-related properties of chemicals for design of sustainable processes: Development of group-contribution+ (GC+) models and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent

    2012-01-01

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated...... of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals......, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22...

  6. Model Related Estimates of time dependent quantiles of peak flows - case study for selected catchments in Poland

    Science.gov (United States)

    Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay

    2016-04-01

    Under Polish climate conditions, the series of Annual Maxima (AM) flows are usually a mixture of peak flows from thaw- and rainfall-generated floods. The northern, lowland regions are dominated by snowmelt floods, whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM values can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why a simple form of trend (e.g. a linear trend) is more difficult to identify in AM time series than in Seasonal Maxima (SM), usually the winter-season time series. Hence it is recommended to analyse trends in SM, where a trend in standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with extrapolation of the trend makes it necessary to apply a trend relationship whose time derivative tends to zero; e.g. one can assume that a new climate-equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both the winter and summer SM time series, at least three distribution functions with trend models in the location, scale and shape parameters are estimated with the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
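    A minimal sketch of a time-dependent upper quantile from a distribution whose location and scale parameters carry linear trends, in the spirit of the GAMLSS fits described above; the Gumbel choice and the trend coefficients are illustrative assumptions, not fitted values.

```python
# Time-dependent flood quantile under linear trends in location and scale.
import numpy as np
from scipy.stats import gumbel_r

years = np.arange(1960, 2021)
t = years - years[0]
mu = 300.0 - 1.2 * t        # declining location (weakening snowmelt floods)
sigma = 80.0 - 0.3 * t      # declining scale (stays positive over this horizon)

q99 = gumbel_r.ppf(0.99, loc=mu, scale=sigma)   # time-dependent 1% AEP flow
print(q99[[0, -1]].round(0))                    # upper quantile in 1960 vs 2020
```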

  7. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
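    A minimal sketch of the model-averaging step: combine per-model log relative risk estimates with posterior model weights, adding the between-model spread to the variance. All numbers are illustrative assumptions, not the 105-city results.

```python
# Bayesian model averaging of a log relative risk across candidate models.
import numpy as np

log_rr = np.array([0.030, 0.045, 0.012, 0.050])       # per-model estimates
var_within = np.array([0.0004, 0.0006, 0.0003, 0.0008])
weights = np.array([0.40, 0.30, 0.10, 0.20])          # posterior model probabilities

mean_bma = np.sum(weights * log_rr)
# Total variance = weighted within-model variance + between-model variance.
var_bma = np.sum(weights * var_within) + np.sum(weights * (log_rr - mean_bma)**2)

print(f"BMA log-RR = {mean_bma:.4f}, sd = {np.sqrt(var_bma):.4f}")
print(f"RR during heat waves ~ {np.exp(mean_bma):.3f}")
```

    The between-model term is what makes the BMA posterior wider than inference conditional on any single selected model, as the abstract notes.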

  8. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method.

  9. Empirical model for estimating dengue incidence using temperature, rainfall, and relative humidity: a 19-year retrospective analysis in East Delhi.

    Science.gov (United States)

    Ramachandran, Vishnampettai G; Roy, Priyamvada; Das, Shukla; Mogha, Narendra Singh; Bansal, Ajay Kumar

    2016-01-01

    Aedes mosquitoes are responsible for transmitting the dengue virus. The mosquito lifecycle is known to be influenced by temperature, rainfall, and relative humidity. This retrospective study investigated whether climatic factors could be used to predict the occurrence of dengue in East Delhi. The number of monthly dengue cases reported over 19 years was obtained from the laboratory records of our institution. Monthly data on rainfall, temperature, and humidity collected from a local weather station were correlated with the number of monthly reported dengue cases. One-way analysis of variance was used to analyse whether the climatic parameters differed significantly among seasons. Four models were developed using negative binomial generalized linear model analysis. Monthly rainfall, temperature, and humidity were used as independent variables, and the number of dengue cases reported monthly was used as the dependent variable. The first model considered data from the same month, while the other three models incorporated data with a lag of 1, 2, and 3 months, respectively. The greatest number of cases was reported during the post-monsoon period each year. Temperature, rainfall, and humidity varied significantly across the pre-monsoon, monsoon, and post-monsoon periods. The best correlation between these three climatic factors and dengue occurrence was at a time lag of 2 months. This study found that temperature, rainfall, and relative humidity significantly affected dengue occurrence in East Delhi. This weather-based empirical dengue model can forecast potential outbreaks 2 months in advance, providing an early warning system for intensifying dengue control measures.
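    A minimal sketch of such a lagged negative binomial model in statsmodels; the simulated monthly series and coefficients are illustrative assumptions, not the East Delhi data.

```python
# Negative binomial GLM for monthly dengue counts with 2-month-lagged climate.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 228                                    # 19 years of monthly records
df = pd.DataFrame({
    "rain": rng.gamma(2.0, 40.0, n),       # mm (hypothetical)
    "temp": rng.normal(30, 3, n),          # deg C
    "humidity": rng.uniform(40, 90, n),    # %
})
mu = np.exp(-4 + 0.004 * df.rain + 0.08 * df.temp + 0.02 * df.humidity)
df["cases"] = rng.negative_binomial(5, 5 / (5 + mu))   # overdispersed counts

# Shift the climate covariates by two months before fitting (the best lag above).
X = sm.add_constant(df[["rain", "temp", "humidity"]].shift(2)).dropna()
y = df["cases"].iloc[2:]

fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)
```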

  10. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.

  11. Exposure to traffic-related air pollution during pregnancy and term low birth weight: estimation of causal associations in a semiparametric model.

    Science.gov (United States)

    Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B

    2012-11-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants.

  12. Age-related change in Wechsler IQ norms after adjustment for the Flynn effect: estimates from three computational models.

    Science.gov (United States)

    Agbayani, Kristina A; Hiscock, Merrill

    2013-01-01

    A previous study found that the Flynn effect accounts for 85% of the normative difference between 20- and 70-year-olds on subtests of the Wechsler intelligence tests. Adjusting scores for the Flynn effect substantially reduces normative age-group differences, but the appropriate amount of adjustment is uncertain. The present study replicates previous findings and employs two other methods of adjusting for the Flynn effect. Averaged across models, results indicate that the Flynn effect accounts for 76% of normative age-group differences on Wechsler IQ subtests. Flynn-effect adjustment reduces the normative age-related decline in IQ from 4.3 to 1.1 IQ points per decade.

  13. Maneuver Estimation Model for Geostationary Orbit Determination

    National Research Council Canada - National Science Library

    Hirsch, Brian J

    2006-01-01

    .... The Clohessy-Wiltshire equations were used to model the relative motion of a geostationary satellite about its intended location, and a nonlinear least squares algorithm was developed to estimate the satellite trajectories.

  14. Asymptotic Optimality of Estimating Function Estimator for CHARN Model

    Directory of Open Access Journals (Sweden)

    Tomoyuki Amano

    2012-01-01

    The CHARN model is a famous and important model in finance; it includes many financial time series models and can be taken as the return process of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. However, it was recently shown that the optimal estimating function estimator (G estimator) is more efficient than the CL estimator for some time series models. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.

  15. Cancer Related-Knowledge - Small Area Estimates

    Science.gov (United States)

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.

  16. RELATIVE CAMERA POSE ESTIMATION METHOD USING OPTIMIZATION ON THE MANIFOLD

    Directory of Open Access Journals (Sweden)

    C. Cheng

    2017-05-01

    To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, the general state estimation model using optimization is derived, going from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. The camera pose estimation problem is then cast in this general state estimation model, with the rigid body transformation parameterized by the Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and the optimization model of the rigid body transformation is thus established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.
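    A minimal sketch of the manifold machinery the record describes: an optimization update computed in the tangent space so(3) and mapped back to a rotation matrix with the exponential map (Rodrigues' formula), instead of adding Euler angles. The update vector below is an illustrative assumption standing in for a Gauss-Newton step.

```python
# Retraction of a tangent-space update onto the SO(3) manifold.
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (so(3) element)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + hat(w)           # first-order expansion near zero
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

R = np.eye(3)                               # current rotation estimate
delta = np.array([0.01, -0.02, 0.005])      # hypothetical solver step in so(3)
R = exp_so3(delta) @ R                      # retraction back onto the manifold
print(np.allclose(R @ R.T, np.eye(3)))      # stays orthogonal: True
```

    Because every update passes through the exponential map, the iterate remains a valid rotation, which is what removes the Euler-angle singularity the abstract mentions.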

  17. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  18. Estimating Parameters Related to the Lifespan of Passively Transferred and Vaccine-Induced Porcine Reproductive and Respiratory Syndrome Virus Type I Antibodies by Modeling Field Data

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    2018-01-01

    The outputs of epidemiological models are strongly related to the structure of the model and its input parameters. The latter are defined by fitting theoretical concepts to actual data derived from field or experimental studies. However, some parameters may remain difficult to estimate and are subject to uncertainty or sensitivity analyses to determine their variation range and their global impact on model outcomes. As such, the evaluation of immunity duration is often a puzzling issue requiring long-term follow-up data that are, most of the time, not available. The present analysis aims at characterizing the kinetics of antibodies against Porcine Reproductive and Respiratory Syndrome virus (PRRSv) from longitudinal data sets. The first data set consisted of the serological follow-up of 22 vaccinated gilts during 21 weeks post-vaccination (PV). The second gathered the maternally derived antibody (MDA) kinetics in piglets from three different farms up to 14 weeks of age. The peak of the PV serological response against PRRSv was reached 6.9 weeks PV on average, with an average duration of antibody persistence of 26.5 weeks. In the monitored cohort of piglets, the duration of passive immunity was found to be relatively short, with an average duration of 4.8 weeks. The level of PRRSv MDAs was found to be correlated with the dams' antibody titer at birth, and antibody persistence was strongly related to the initial MDA titers in piglets. These results evidence the importance of the PRRSv vaccination schedule in sows to optimize the delivery of antibodies to suckling piglets. These estimates of the duration of active and passive immunity could be further used as input parameters of epidemiological models to analyze their impact on the persistence of PRRSv within farms.
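    A minimal sketch of how an MDA half-life and waning age can be recovered from a log-linear fit to longitudinal titers; the titers, ages, and seronegativity cutoff are illustrative assumptions, not the study's field data.

```python
# Log-linear antibody decay: half-life and age at waning from titer data.
import numpy as np

age_weeks = np.array([1, 2, 3, 4, 5, 6])
log2_titer = np.array([9.8, 9.1, 8.2, 7.6, 6.9, 6.1])   # hypothetical log2 titers

slope, intercept = np.polyfit(age_weeks, log2_titer, 1)
half_life_weeks = -1.0 / slope            # weeks per one halving (log2) step
waning_age = (intercept - 5.0) / -slope   # age at a hypothetical cutoff of log2 = 5

print(f"half-life ~ {half_life_weeks:.1f} wk; seronegative ~ {waning_age:.1f} wk")
```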

  19. Estimating Parameters Related to the Lifespan of Passively Transferred and Vaccine-Induced Porcine Reproductive and Respiratory Syndrome Virus Type I Antibodies by Modeling Field Data.

    Science.gov (United States)

    Andraud, Mathieu; Fablet, Christelle; Renson, Patricia; Eono, Florent; Mahé, Sophie; Bourry, Olivier; Rose, Nicolas

    2018-01-01

    The outputs of epidemiological models are strongly related to the structure of the model and its input parameters. The latter are defined by fitting theoretical concepts to actual data derived from field or experimental studies. However, some parameters may remain difficult to estimate and are subject to uncertainty or sensitivity analyses to determine their variation range and their global impact on model outcomes. As such, the evaluation of immunity duration is often a puzzling issue requiring long-term follow-up data that are, most of the time, not available. The present analysis aims at characterizing the kinetics of antibodies against Porcine Reproductive and Respiratory Syndrome virus (PRRSv) from longitudinal data sets. The first data set consisted of the serological follow-up of 22 vaccinated gilts during 21 weeks post-vaccination (PV). The second gathered the maternally derived antibody (MDA) kinetics in piglets from three different farms up to 14 weeks of age. The peak of the PV serological response against PRRSv was reached 6.9 weeks PV on average, with an average duration of antibody persistence of 26.5 weeks. In the monitored cohort of piglets, the duration of passive immunity was found to be relatively short, with an average duration of 4.8 weeks. The level of PRRSv MDAs was found to be correlated with the dams' antibody titer at birth, and antibody persistence was strongly related to the initial MDA titers in piglets. These results evidence the importance of the PRRSv vaccination schedule in sows to optimize the delivery of antibodies to suckling piglets. These estimates of the duration of active and passive immunity could be further used as input parameters of epidemiological models to analyze their impact on the persistence of PRRSv within farms.

  20. Estimating Parameters Related to the Lifespan of Passively Transferred and Vaccine-Induced Porcine Reproductive and Respiratory Syndrome Virus Type I Antibodies by Modeling Field Data

    Science.gov (United States)

    Andraud, Mathieu; Fablet, Christelle; Renson, Patricia; Eono, Florent; Mahé, Sophie; Bourry, Olivier; Rose, Nicolas

    2018-01-01

    The outputs of epidemiological models are strongly related to the structure of the model and its input parameters. The latter are defined by fitting theoretical concepts to actual data derived from field or experimental studies. However, some parameters may remain difficult to estimate and are subject to uncertainty or sensitivity analyses to determine their variation range and their global impact on model outcomes. As such, the evaluation of immunity duration is often a puzzling issue requiring long-term follow-up data that are, most of the time, not available. The present analysis aims at characterizing the kinetics of antibodies against Porcine Reproductive and Respiratory Syndrome virus (PRRSv) from longitudinal data sets. The first data set consisted of the serological follow-up of 22 vaccinated gilts during 21 weeks post-vaccination (PV). The second gathered the maternally derived antibody (MDA) kinetics in piglets from three different farms up to 14 weeks of age. The peak of the PV serological response against PRRSv was reached 6.9 weeks PV on average, with an average duration of antibody persistence of 26.5 weeks. In the monitored cohort of piglets, the duration of passive immunity was found to be relatively short, with an average duration of 4.8 weeks. The level of PRRSv MDAs was found to be correlated with the dams' antibody titer at birth, and antibody persistence was strongly related to the initial MDA titers in piglets. These results evidence the importance of the PRRSv vaccination schedule in sows to optimize the delivery of antibodies to suckling piglets. These estimates of the duration of active and passive immunity could be further used as input parameters of epidemiological models to analyze their impact on the persistence of PRRSv within farms. PMID:29435455

  1. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are twofold. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data with image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  2. Applicability of two-step models in estimating nitrification kinetics from batch respirograms under different relative dynamics of ammonia and nitrite oxidation.

    Science.gov (United States)

    Chandran, K; Smets, B F

    2000-10-05

    A mechanistically based nitrification model was formulated to facilitate determination of both NH(4)(+)-N to NO(2)(-)-N and NO(2)(-)-N to NO(3)(-)-N oxidation kinetics from a single NH(4)(+)-N to NO(3)(-)-N batch-oxidation profile by explicitly considering the kinetics of each oxidation step. The developed model incorporated a novel convention for expressing the concentrations of nitrogen species in terms of their nitrogenous oxygen demand (NOD). Stoichiometric coefficients relating nitrogen removal, oxygen uptake, and biomass synthesis were derived from an electron-balanced equation. A parameter identifiability analysis of the developed two-step model revealed a decrease in correlation and an increase in the precision of the kinetic parameter estimates when NO(2)(-)-N oxidation kinetics became increasingly rate-limiting. These findings demonstrate that two-step models describe nitrification kinetics adequately only when NH(4)(+)-N to NO(3)(-)-N oxidation profiles contain sufficient information pertaining to both nitrification steps. Thus, the rate-determining step in overall nitrification must be identified before applying conventionally used models to describe batch nitrification respirograms.
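    A minimal sketch of a two-step Monod model of the kind described above, simulating a batch NH4+ -> NO2- -> NO3- oxidation profile; all kinetic constants and initial conditions are illustrative assumptions, not the paper's estimates.

```python
# Two-step Monod kinetics for batch nitrification (NH4+ -> NO2- -> NO3-).
import numpy as np
from scipy.integrate import solve_ivp

mu1, K1, Y1 = 0.032, 0.50, 0.15   # NH4+ oxidizers: 1/h, mg N/L, yield (hypothetical)
mu2, K2, Y2 = 0.026, 0.30, 0.05   # NO2- oxidizers (hypothetical)

def rhs(t, y):
    sNH4, sNO2, sNO3, X1, X2 = y
    r1 = mu1 * sNH4 / (K1 + sNH4) * X1 / Y1   # NH4+ -> NO2- oxidation rate
    r2 = mu2 * sNO2 / (K2 + sNO2) * X2 / Y2   # NO2- -> NO3- oxidation rate
    return [-r1, r1 - r2, r2, Y1 * r1, Y2 * r2]

sol = solve_ivp(rhs, (0, 8), [20.0, 0.0, 0.0, 30.0, 15.0], dense_output=True)
print("final NO3-N (mg/L):", sol.y[2, -1].round(2))
```

    When the second step is strongly rate-limiting, NO2- accumulates and the profile carries information about both steps, which is the identifiability condition the abstract describes.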

  3. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
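
    For orientation, the two special cases mentioned in the highlights are the textbook relations below; the paper's unified "versatile inequality" itself is not reproduced here:

    ```latex
    % Robertson uncertainty relation for observables A and B:
    \sigma(A)\,\sigma(B) \ge \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|

    % Cramer-Rao bound for an unbiased estimator \hat\theta from n i.i.d.
    % samples with Fisher information I(\theta):
    \operatorname{Var}(\hat\theta) \ge \frac{1}{n\,I(\theta)}
    ```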

  4. A county-level estimate of PM2.5 related chronic mortality risk in China based on multi-model exposure data

    Science.gov (United States)

    Wang, Qing; Wang, Jiaonan; He, Mike Z.; Kinney, Patrick L.; Li, Tiantian

    2017-01-01

    Background Ambient fine particulate matter (PM2.5) pollution is currently a serious environmental problem in China, but evidence on health effects at higher spatial resolution and with broader coverage is insufficient. Objective This study aims to provide a better overall understanding of the long-term mortality effects of PM2.5 pollution in China and a county-level spatial map for estimating PM2.5 related premature deaths across the entire country. Method Using four sets of satellite-derived PM2.5 concentration data and the integrated exposure-response model, which the Global Burden of Disease (GBD) study employed to estimate global mortality from ambient and household air pollution in 2010, we estimated PM2.5 related premature mortality for five endpoints across China in 2010. Result Premature deaths attributed to PM2.5 nationwide amounted to 1.27 million in total, comprising 119,167, 83,976, 390,266, and 670,906 deaths for adult chronic obstructive pulmonary disease, lung cancer, ischemic heart disease, and stroke, respectively; 3,995 deaths for acute lower respiratory infections were estimated in children under the age of 5. About half of the premature deaths were from counties with annual average PM2.5 concentrations above 63.61 μg/m3, which cover 16.97% of the Chinese territory. These counties were largely located in the Beijing-Tianjin-Hebei region and the North China Plain. High population density and high pollution areas exhibited the highest health risks attributed to air pollution. On a per capita basis, the highest values were mostly located in heavily polluted industrial regions. Conclusion PM2.5-attributable health risk is closely associated with high population density and high levels of pollution in China. Further estimates using long-term historical exposure data and concentration-response (C-R) relationships should be completed in the future to investigate longer-term trends in the effects of PM2.5. PMID:29097050
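
    The attributable-mortality arithmetic behind such county-level estimates follows the GBD integrated exposure-response (IER) form; the sketch below uses placeholder parameters, not the fitted GBD coefficients:

    ```python
    import numpy as np

    def ier_rr(c, alpha, gamma, delta, c0):
        """GBD-style integrated exposure-response: relative risk at
        PM2.5 concentration c (ug/m3) above a counterfactual level c0."""
        excess = alpha * (1.0 - np.exp(-gamma * (c - c0) ** delta))
        return np.where(c > c0, 1.0 + excess, 1.0)

    # Placeholder parameters and county data (illustrative only)
    alpha, gamma, delta, c0 = 1.8, 0.01, 0.8, 5.8
    pm25   = np.array([35.0, 64.0, 90.0])      # county annual means
    deaths = np.array([1200., 800., 1500.])    # county baseline deaths

    rr = ier_rr(pm25, alpha, gamma, delta, c0)
    af = (rr - 1.0) / rr                       # population attributable fraction
    print(np.round(af * deaths, 1))            # attributable premature deaths
    ```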

  5. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  6. Estimated incidence of cardiovascular complications related to type 2 diabetes in Mexico using the UKPDS outcome model and a population-based survey.

    Science.gov (United States)

    Reynoso-Noverón, Nancy; Mehta, Roopa; Almeda-Valdes, Paloma; Rojas-Martinez, Rosalba; Villalpando, Salvador; Hernández-Ávila, Mauricio; Aguilar-Salinas, Carlos A

    2011-01-07

    To estimate the incidence of complications, life expectancy and diabetes-related mortality in the Mexican diabetic population over the next two decades using data from a nation-wide, population-based survey and the United Kingdom Prospective Diabetes Study (UKPDS) outcome model. The cohort included all patients with type 2 diabetes evaluated during the National Health and Nutrition Survey (ENSANut) 2006. ENSANut is a probabilistic multistage stratified survey whose aim was to measure the prevalence of chronic diseases. A total of 47,152 households were visited. Results are shown stratified by gender, time since diagnosis (> or ≤ 10 years) and age at the time of diagnosis (> or ≤ 40 years). The prevalence of diabetes in our cohort was 14.4%. The predicted 20-year incidences of chronic complications per 1000 individuals are: ischemic heart disease 112, myocardial infarction 260, heart failure 113, stroke 101, and amputation 62. Furthermore, 539 per 1000 patients will have a diabetes-related premature death. The average life expectancy for the diabetic population is 10.9 years (95%CI 10.7-11.2); this decreases to 8.3 years after adjusting for quality of life (95%CI 8.1-8.5). Male sex and cases diagnosed after age 40 have the highest risk of developing at least one major complication during the next 20 years. Based on the current clinical profile of Mexican patients with diabetes, the burden of disease-related complications will be tremendous over the next two decades.

  7. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem, where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (those in which, in expectation, each comparison set occurs the same number of times), the mean squared error decreases with the cardinality of the comparison sets for a broad class of Thurstone choice models, but only marginally, following a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of the comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
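
    A minimal sketch of maximum likelihood estimation for the Luce (softmax) special case over top-1 lists, which reduces to Bradley-Terry when every comparison set is a pair; the observations below are made up:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Made-up top-1 observations: (comparison set, chosen item), 4 items
    obs = [((0, 1), 0), ((0, 1, 2), 2), ((1, 3), 3),
           ((0, 2, 3), 0), ((1, 2), 1), ((0, 3), 0)]
    n_items = 4

    def neg_log_lik(theta):
        nll = 0.0
        for items, choice in obs:
            s = theta[list(items)]
            # Luce model: P(choice | set) = exp(theta_c) / sum_j exp(theta_j)
            nll -= s[items.index(choice)] - np.log(np.exp(s).sum())
        return nll

    res = minimize(neg_log_lik, np.zeros(n_items))
    theta_hat = res.x - res.x.mean()   # strengths identifiable up to a constant
    print(np.round(theta_hat, 2))
    ```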

  8. A Bayesian Model and Stochastic Exposure (Dose) Estimation for Relative Exposure Risk Comparison Involving Asbestos-Containing Dropped Ceiling Panel Installation and Maintenance Tasks.

    Science.gov (United States)

    Boelter, Fred W; Xia, Yulin; Persky, Jacob D

    2017-09-01

    Assessing exposures to hazards in order to characterize risk is at the core of occupational hygiene. Our study examined dropped ceiling systems commonly used in schools and commercial buildings, and lay-in ceiling panels that may have contained asbestos prior to the mid-to-late 1970s; most ceiling panels and tiles, however, do not contain asbestos. Since asbestos risk relates to dose, we estimated the distribution of eight-hour time-weighted average (TWA) concentrations and one-year exposures (a one-year dose equivalent) to asbestos fibers (asbestos f/cc-years) for five groups of workers who may encounter dropped ceilings: specialists, generalists, maintenance workers, nonprofessional do-it-yourself (DIY) persons, and other tradespersons who are bystanders to ceiling work. Concentration data (asbestos f/cc) were obtained through two exposure assessment studies in the field and one chamber study. Bayesian and stochastic models were applied to estimate distributions of eight-hour TWAs and annual exposures (dose). The eight-hour TWAs for all work categories were below current and historic occupational exposure limits (OELs). Exposures to asbestos fibers from dropped ceiling work would be categorized as “highly controlled” for maintenance workers and “well controlled” for the remaining work categories, according to the American Industrial Hygiene Association exposure control rating system. Annual exposures (dose) were found to be greatest for specialists, followed by maintenance workers, generalists, bystanders, and DIY persons. On a comparative basis, modeled dose and thus risk from dropped ceilings for all work categories were orders of magnitude lower than published exposures for other sources of banned friable asbestos-containing building material commonly encountered in the construction trades. © 2016 The Authors. Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
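
    One plausible reading of "Bayesian and stochastic models were applied to estimate distributions of eight-hour TWAs and annual exposures" is a Monte Carlo pass over a posterior for the TWA; the lognormal parameters and task frequencies below are placeholders, not the study's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder posterior for the 8-h TWA (f/cc): lognormal with
    # assumed geometric mean 0.002 and geometric SD 2.5
    gm, gsd = 0.002, 2.5
    twa = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=100_000)

    # Assumed exposure frequency: 15 task-days per 250-day working year
    task_days_per_year, working_days = 15, 250
    annual_dose = twa * task_days_per_year / working_days   # f/cc-years

    print(f"median 8-h TWA: {np.median(twa):.4f} f/cc")
    print(f"95th pct annual dose: {np.percentile(annual_dose, 95):.5f} f/cc-years")
    ```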

  9. Estimation of carrying capacity of the Gulf of Kachchh, west coast of India in relation to petroleum hydrocarbon through oil spill modeling

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Babu, M.T.; Reddy, G.S.; Sudheesh, K.; Desa, E.; Zingde, M.D.

    devised to estimate CC using a coupled 2D hydrodynamic - oil spill model. The model was run to assess the dissolution of the oil for various meteorological and oceanographic conditions and oil characteristics. In the case of operational discharge from a...

  10. Blind estimation of a ship's relative wave heading

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2012-01-01

    This article proposes a method to estimate a ship’s relative heading against the waves. The procedure relies purely on shipboard measurements of global responses such as motion components, accelerations and the bending moment amidships. There is no particular (mathematical) model connected to the estimate, and therefore it is called a ’blind estimate’. In this introductory study, the approach is tested by analysing simulated data. The analysis reveals that it is possible to estimate a ship’s relative heading on the basis of shipboard measurements only.

  11. Estimated incidence of cardiovascular complications related to type 2 diabetes in Mexico using the UKPDS outcome model and a population-based survey

    Directory of Open Access Journals (Sweden)

    Aguilar-Salinas Carlos A

    2011-01-01

    Full Text Available Abstract Background To estimate the incidence of complications, life expectancy and diabetes-related mortality in the Mexican diabetic population over the next two decades using data from a nation-wide, population-based survey and the United Kingdom Prospective Diabetes Study (UKPDS) outcome model. Methods The cohort included all patients with type 2 diabetes evaluated during the National Health and Nutrition Survey (ENSANut) 2006. ENSANut is a probabilistic multistage stratified survey whose aim was to measure the prevalence of chronic diseases. A total of 47,152 households were visited. Results are shown stratified by gender, time since diagnosis (> or ≤ 10 years) and age at the time of diagnosis (> or ≤ 40 years). Results The prevalence of diabetes in our cohort was 14.4%. The predicted 20-year incidences of chronic complications per 1000 individuals are: ischemic heart disease 112, myocardial infarction 260, heart failure 113, stroke 101, and amputation 62. Furthermore, 539 per 1000 patients will have a diabetes-related premature death. The average life expectancy for the diabetic population is 10.9 years (95%CI 10.7-11.2); this decreases to 8.3 years after adjusting for quality of life (95%CI 8.1-8.5). Male sex and cases diagnosed after age 40 have the highest risk of developing at least one major complication during the next 20 years. Conclusions Based on the current clinical profile of Mexican patients with diabetes, the burden of disease-related complications will be tremendous over the next two decades.

  12. Estimating Relative Uncertainty of Radiative Transition Rates

    Directory of Open Access Journals (Sweden)

    Daniel E. Kelleher

    2014-11-01

    Full Text Available We consider a method to estimate relative uncertainties of radiative transition rates in an atomic spectrum. Few of these many transitions have had their rates determined by more than two reference-quality sources. One could estimate uncertainties for each transition, but analyses with only one degree of freedom are generally fraught with difficulties. We pursue a way to empirically combine the limited uncertainty information in each of the many transitions. We “pool” a dimensionless measure of relative dispersion, the “Coefficient of Variation of the mean,” \(C_V^n \equiv s/(\bar{x}\sqrt{n})\). Here, for each transition rate, \(s\) is the standard deviation and \(\bar{x}\) is the mean of \(n\) independent data sources. \(C_V^n\) is bounded by zero and one whenever the determined quantity is intrinsically positive. We scatter-plot the \(C_V^n\) values as a function of the “line strength” (here a more useful radiative transition rate than the transition probability). We find a curve through comparable \(C_V^n\) values that envelops a specified percentage of them (e.g., 95%). We take this curve to represent the expanded relative uncertainty of the mean. The method is most advantageous when the number of determined transition rates is large while the number of independent determinations per transition is small. The transition rate data of Na III serve as an example.
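
    Computing the pooled statistic is a one-liner per transition; the rate determinations below are invented for illustration:

    ```python
    import numpy as np

    # Invented data: independent rate determinations for three transitions
    transitions = {
        "line A": [2.1e8, 2.4e8, 2.2e8],
        "line B": [5.0e7, 6.1e7],
        "line C": [1.3e6, 1.1e6, 1.5e6, 1.2e6],
    }

    for name, rates in transitions.items():
        x = np.asarray(rates, dtype=float)
        n = x.size
        cvn = x.std(ddof=1) / (x.mean() * np.sqrt(n))  # C_V^n = s / (xbar sqrt(n))
        print(f"{name}: n={n}, C_V^n = {cvn:.3f}")
    ```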

  13. A Bigraph Relational Model

    DEFF Research Database (Denmark)

    Beauquier, Maxime; Schürmann, Carsten

    2011-01-01

    In this paper, we present a model based on relations for bigraphical reactive systems [Milner09]. Its defining characteristic is that validity and reaction relations are captured as traces in a multi-set rewriting system. The relational model is derived from Milner's graphical definition...

  14. Identification and Quantification of Uncertainties Related to Using Distributed X-band Radar Estimated Precipitation as input in Urban Drainage Models

    DEFF Research Database (Denmark)

    Pedersen, Lisbeth

    the rainfall, but the energy reflected from the raindrops in the atmosphere. As a result, a calibration from reflectivity to rainfall intensities is required. This thesis focuses on identifying and estimating uncertainties related to LAWR rainfall estimates. In this connection, the calibration procedure is a key...... and possible improvements are suggested. The LAWR is designed to provide rainfall data, especially for urban drainage applications, and as part of the thesis the integration of LAWR data into the DHI software application MIKE URBAN has been analyzed. The work has resulted in identification of scaling issues

  15. Models as Relational Categories

    Science.gov (United States)

    Kokkonen, Tommi

    2017-11-01

    Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes regarding learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of the common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, the ways in which the expert-like knowledge develops and how to best support it is in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done.

  16. Discrete Choice Models - Estimation of Passenger Traffic

    DEFF Research Database (Denmark)

    Sørensen, Majken Vildrik

    2003-01-01

    model, data and estimation are described, with a focus on the possibilities/limitations of different techniques. Two special issues of modelling are addressed in further detail, namely data segmentation and estimation of Mixed Logit models. Both issues are concerned with whether individuals can be assumed...... for estimation of choice models. For application of the method, an algorithm is provided with a case. Also for the second issue, estimation of Mixed Logit models, a method was proposed. The most commonly used approach to estimating Mixed Logit models is to employ Maximum Simulated Likelihood (MSL) estimation...... distributions of coefficients were found. All the shapes of distributions found complied with sound knowledge in terms of which should be uni-modal, sign-specific and/or skewed distributions.

  17. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  18. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  19. Modeling the movement and equilibrium of water in the body of ruminants in relation to estimating body composition by deuterium oxide dilution

    International Nuclear Information System (INIS)

    Arnold, R.N.

    1986-01-01

    Deuterium oxide (D2O) dilution was evaluated for use in estimating body composition of ruminants. Empty body composition of cattle could not be accurately estimated by two- or three-compartment models when solved on the basis of clearance of D2O from blood. A 29-compartment blood-flow model was developed from measured blood flow rates and water volumes of tissues of sheep. The rates of equilibration of water in tissues that were simulated by the blood-flow model were much faster than actual rates measured in sheep and cattle. The incorporation of diffusion hindrances for movement of water into tissues enabled the blood-flow model to simulate the measured equilibration rates in tissues, but the values of the diffusion coefficients were different for each tissue. The D2O-disappearance curve for blood simulated by the blood-flow model with diffusion limitations comprised four exponential components. The tissues and gastrointestinal tract contents were placed into five groups based upon the rate of equilibration. Water in the organs of the body equilibrated with water in blood within 3 min. Water in visceral fat, head, and some of the gastrointestinal tract tissues equilibrated within 8 to 16 min. Water in skeletal muscle, fat, and bone and the contents of some segments of the gastrointestinal tract equilibrated within 30 to 36 min. Water in the tissues and contents of the cecum and upper-large intestine equilibrated within 160 to 200 min. Water in ruminal tissue and contents equilibrated within 480 min.

  1. Parameter Estimation of a Reliability Model of Demand-Caused and Standby-Related Failures of Safety Components Exposed to Degradation by Demand Stress and Ageing That Undergo Imperfect Maintenance

    Directory of Open Access Journals (Sweden)

    S. Martorell

    2017-01-01

    Full Text Available One can find many reliability, availability, and maintainability (RAM) models proposed in the literature. However, such models become more complex day after day, as there is an attempt to capture equipment performance in a more realistic way, for example by explicitly addressing the effect of component ageing and degradation, surveillance activities, and corrective and preventive maintenance policies. There is therefore a need to fit the best model to real data by estimating the model parameters using an appropriate tool. This problem is not easy to solve in some cases, since the number of parameters is large and the available data are scarce. This paper considers two main failure models commonly adopted to represent the probability of failure on demand (PFD) of safety equipment: (1) demand-caused and (2) standby-related failures. It proposes a maximum likelihood estimation (MLE) approach for parameter estimation of a reliability model of demand-caused and standby-related failures of safety components exposed to degradation by demand stress and ageing that undergo imperfect maintenance. The case study considers real failure, test, and maintenance data for a typical motor-operated valve in a nuclear power plant. The results of the parameter estimation and the adoption of the best model are discussed.
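
    The abstract does not spell out the likelihood, so the sketch below uses a common simplified PFD form, PFD(t) = 1 - (1 - rho) * exp(-lambda * t), with rho the per-demand failure probability and lambda the standby failure rate (not necessarily the authors' full model with ageing and imperfect maintenance); the test records and starting values are made up:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Made-up test records: standby time since last test (h) and outcome
    t_standby = np.array([720., 1440., 720., 2160., 1440., 720., 2880., 1440.])
    failed    = np.array([0,    0,     0,    1,     0,     0,    1,     0])

    def neg_log_lik(params):
        rho, lam = params
        p = 1.0 - (1.0 - rho) * np.exp(-lam * t_standby)   # PFD(t)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[0.01, 1e-4], method="L-BFGS-B",
                   bounds=[(1e-9, 0.5), (1e-9, 1e-2)])
    rho_hat, lam_hat = res.x
    print(f"rho = {rho_hat:.4f}, lambda = {lam_hat:.2e} per hour")
    ```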

  2. Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling

    Science.gov (United States)

    Babcock, Ben

    2011-01-01

    Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
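
    As a rough illustration of the sampler class named in the title (not the study's exact algorithm or priors), here is a toy Metropolis-within-Gibbs scheme for a simple noncompensatory two-dimensional IRT model; conditional independence of persons (and of items) makes element-wise acceptance valid:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy noncompensatory 2-D IRT: P(correct) = prod_d sigmoid(theta_d - b_jd)
    n_persons, n_items, n_dim = 200, 10, 2
    theta_true = rng.normal(size=(n_persons, n_dim))
    b_true = rng.normal(size=(n_items, n_dim))

    def prob(theta, b):
        z = theta[:, None, :] - b[None, :, :]            # (P, J, D)
        return np.prod(1.0 / (1.0 + np.exp(-z)), axis=2)

    X = (rng.random((n_persons, n_items)) < prob(theta_true, b_true)).astype(float)

    def ll_matrix(theta, b):
        p = np.clip(prob(theta, b), 1e-12, 1 - 1e-12)
        return X * np.log(p) + (1 - X) * np.log(1 - p)   # (P, J)

    theta = np.zeros((n_persons, n_dim))
    b = np.zeros((n_items, n_dim))
    for _ in range(2000):
        # MH update of abilities, accepted person by person
        prop = theta + 0.3 * rng.normal(size=theta.shape)
        log_r = (ll_matrix(prop, b).sum(1) - ll_matrix(theta, b).sum(1)
                 - 0.5 * (prop**2 - theta**2).sum(1))    # N(0,1) priors
        acc = np.log(rng.random(n_persons)) < log_r
        theta[acc] = prop[acc]
        # MH update of item parameters, accepted item by item
        propb = b + 0.3 * rng.normal(size=b.shape)
        log_r = (ll_matrix(theta, propb).sum(0) - ll_matrix(theta, b).sum(0)
                 - 0.5 * (propb**2 - b**2).sum(1))
        acc = np.log(rng.random(n_items)) < log_r
        b[acc] = propb[acc]

    print("final log-likelihood:", ll_matrix(theta, b).sum().round(1))
    ```

    A real analysis would also need to handle identifiability (dimension permutation and location/scale indeterminacy), which this sketch ignores.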

  3. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  4. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  5. On the estimation of heat-intensity and heat-duration effects in time series models of temperature-related mortality in Stockholm, Sweden

    Science.gov (United States)

    2012-01-01

    Background We examine the effect of heat waves on mortality, over and above what would be predicted on the basis of temperature alone. Methods Present modeling approaches may not fully capture extra effects relating to heat wave duration, possibly because the mechanisms of action and the population at risk are different under more extreme conditions. Modeling such extra effects can be achieved using the commonly left-out effect-modification between the lags of temperature in distributed lag models. Results Using data from Stockholm, Sweden, and a variety of modeling approaches, we found that heat wave effects amount to a stable and statistically significant 8.1-11.6% increase in excess deaths per heat wave day. The effects explicitly relating to heat wave duration (2.0–3.9% excess deaths per day) were more sensitive to the degrees of freedom allowed for in the overall temperature-mortality relationship. However, allowing for a very large number of degrees of freedom indicated over-fitting the overall temperature-mortality relationship. Conclusions Modeling additional heat wave effects, e.g. between lag effect-modification, can give a better description of the effects from extreme temperatures, particularly in the non-elderly population. We speculate that it is biologically plausible to differentiate effects from heat and heat wave duration. PMID:22490779
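
    A skeletal version of this kind of analysis, with synthetic data (whose true extra heat-wave effect is zero, so the fitted coefficient should land near 0%) and statsmodels assumed available:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Made-up daily series: temperature (C) and Poisson death counts,
    # with no true extra heat-wave effect built in
    n = 1500
    temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)
    deaths = rng.poisson(np.exp(3.0 + 0.0005 * (temp - 15) ** 2))

    df = pd.DataFrame({"deaths": deaths, "temp": temp})
    # Heat-wave day: third consecutive day above the 90th percentile
    hot = (df["temp"] > df["temp"].quantile(0.90)).astype(int)
    df["hw"] = (hot.rolling(3).sum() == 3).astype(int)

    # Poisson regression: quadratic temperature term plus heat-wave indicator
    X = sm.add_constant(np.column_stack([df["temp"], df["temp"] ** 2, df["hw"]]))
    fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
    coef = np.asarray(fit.params)
    print(f"extra heat-wave effect: {100 * (np.exp(coef[3]) - 1):+.1f}% deaths/day")
    ```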

  6. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult

  7. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  8. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  9. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

    ...nonparametric estimators. Theoretically ideal instruments can be defined to ensure a limiting Gaussian distribution of IV estimators, but unfortunately such instruments are unlikely to be found in real data. In the present paper we suggest an IV estimator where the Hodrick-Prescott filtered trends are used as instruments for the regressors in cointegrating regressions. These instruments are almost ideal, and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples.

  10. Estimating relative risks for common outcome using PROC NLP.

    Science.gov (United States)

    Yu, Binbing; Wang, Zhuoqiao

    2008-05-01

    In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the outcome is not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed, but these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. A sample SAS macro for calculating relative risk is provided in the appendix.
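
    The paper's macro targets SAS PROC NLP; an analogous constrained MLE of the log-binomial model can be sketched in Python, with the per-subject constraint log p_i <= 0 keeping predicted probabilities in bounds (data simulated, true RR = 1.8):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)

    # Made-up cohort: binary exposure and one covariate
    n = 500
    x = rng.integers(0, 2, n)
    z = rng.normal(size=n)
    p_true = np.exp(-1.6 + np.log(1.8) * x + 0.1 * z)   # log-binomial truth
    y = (rng.random(n) < p_true).astype(float)

    X = np.column_stack([np.ones(n), x, z])

    def neg_log_lik(beta):
        p = np.clip(np.exp(X @ beta), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Enforce log p_i <= 0 (fitted probabilities <= 1) for every subject
    cons = {"type": "ineq", "fun": lambda beta: -(X @ beta)}
    res = minimize(neg_log_lik, x0=[-2.0, 0.0, 0.0],
                   constraints=cons, method="SLSQP")
    print(f"estimated RR for exposure: {np.exp(res.x[1]):.2f}")
    ```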

  11. Modelling dense relational data

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    Relational modelling classically considers sparse and discrete data. Measures of influence computed pairwise between temporal sources naturally give rise to dense continuous-valued matrices, for instance p-values from Granger causality. Due to asymmetry or lack of positive definiteness, such matrices are not naturally suited for kernel K-means. We propose a generative Bayesian model for dense matrices which generalizes kernel K-means to consider off-diagonal interactions in matrices of interactions, and demonstrate its ability to detect structure on both artificial data and two real data sets.

  12. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  13. A new relation to estimate nuclear radius

    International Nuclear Information System (INIS)

    Singh, M.; Kumar, Pradeep; Singh, Y.; Gupta, K.K.; Varshney, A.K.; Gupta, D.K.

    2013-01-01

    The uncertainty found in Grodzins' semi-empirical relation may be due to the non-consideration of asymmetry in the relation. In the present work we propose a new relation connecting B(E2; 2₁⁺ → 0₁⁺) and E(2₁⁺) with the asymmetry parameter γ.

  14. Lithosphere and upper-mantle structure of the southern Baltic Sea estimated from modelling relative sea-level data with glacial isostatic adjustment

    Science.gov (United States)

    Steffen, H.; Kaufmann, G.; Lampe, R.

    2014-06-01

    During the last glacial maximum, a large ice sheet covered Scandinavia, which depressed the earth's surface by several 100 m. In northern central Europe, mass redistribution in the upper mantle led to the development of a peripheral bulge. It has been subsiding since the beginning of deglaciation due to the viscoelastic behaviour of the mantle. We analyse relative sea-level (RSL) data of southern Sweden, Denmark, Germany, Poland and Lithuania to determine the lithospheric thickness and radial mantle viscosity structure for distinct regional RSL subsets. We load a 1-D Maxwell-viscoelastic earth model with a global ice-load history model of the last glaciation. We test two commonly used ice histories, RSES from the Australian National University and ICE-5G from the University of Toronto. Our results indicate that the lithospheric thickness varies, depending on the ice model used, between 60 and 160 km. The lowest values are found in the Oslo Graben area and the western German Baltic Sea coast. In between, thickness increases by at least 30 km, tracing the Ringkøbing-Fyn High. In Poland and Lithuania, lithospheric thickness reaches up to 160 km. However, the latter values are not well constrained as the confidence regions are large. Upper-mantle viscosity is found to bracket [2-7] × 10²⁰ Pa s when using ICE-5G. Employing RSES, much higher values of 2 × 10²¹ Pa s are obtained for the southern Baltic Sea. Further investigations should evaluate whether this ice-model version and/or the RSL data need revision. We confirm that the lower-mantle viscosity in Fennoscandia can only be poorly resolved. The lithospheric structure inferred from RSES partly supports structural features of regional and global lithosphere models based on thermal or seismological data. While there is agreement in eastern Europe and southwest Sweden, the structure in an area from south of Norway to northern Germany shows large discrepancies for two of the tested lithosphere models. The lithospheric

  15. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  16. Estimation in autoregressive models with Markov regime

    OpenAIRE

    Ríos, Ricardo; Rodríguez, Luis

    2005-01-01

    In this paper we derive the consistency of the penalized likelihood method for estimating the number of states of the hidden Markov chain in autoregressive models with Markov regime, using an SAEM-type algorithm to estimate the model parameters. We test the null hypothesis of a hidden Markov model against an autoregressive process with Markov regime.

  17. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for autoregressive conditional heteroscedasticity (ARCH) models with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the conditional variance of the process is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator

  18. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
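
    As context for why the authors avoid it, the conventional "solve the PDE for every candidate parameter" approach can be sketched for the 1-D heat equation u_t = D u_xx, recovering the diffusion coefficient D by nested least squares (everything here is synthetic):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Forward solver: explicit finite differences for u_t = D u_xx
    def solve_heat(D, u0, dx, dt, steps):
        u = u0.copy()
        for _ in range(steps):
            u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    nx, dx, dt, steps = 51, 0.02, 1e-4, 400
    x = np.linspace(0, 1, nx)
    u0 = np.sin(np.pi * x)               # initial condition, u = 0 at boundaries

    rng = np.random.default_rng(3)
    D_true = 0.35
    data = solve_heat(D_true, u0, dx, dt, steps) + rng.normal(0, 0.01, nx)

    # Nested optimization: each candidate D requires a full PDE solve
    sse = lambda D: np.sum((solve_heat(D, u0, dx, dt, steps) - data) ** 2)
    res = minimize_scalar(sse, bounds=(0.01, 1.0), method="bounded")
    print(f"D_hat = {res.x:.3f} (true {D_true})")
    ```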

  19. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  20. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-squares minimization of the error between the model response and the actual system response. However, with the proliferation of high-speed digital computers, elegant and innovative techniques like the filter error method, H-infinity and Artificial Neural Networks are finding more and more...

  1. Vision System for Relative Motion Estimation from Optical Flow

    Directory of Open Access Journals (Sweden)

    Sergey M. Sokolov

    2010-08-01

    Full Text Available In recent years there has been increasing interest in methods of motion analysis based on visual data acquisition. Vision systems intended to obtain quantitative data regarding motion in real time are especially in demand. This paper discusses vision systems that provide information on relative object motion in real time. It is shown that algorithms solving a wide range of practical relative-motion problems can be built on the basis of known optical flow algorithms. One of the system's goals is the creation of an economically efficient intelligent sensor prototype for estimating relative object motion from optical flow. Results of experiments with a prototype system model are shown.
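
    A minimal sketch of the idea, assuming OpenCV is available: compute dense optical flow between two frames and take a robust summary of the flow field as the dominant relative translation (the frames here are synthetic):

    ```python
    import numpy as np
    import cv2

    # Synthetic pair of frames: a random texture shifted by (dx=3, dy=1)
    rng = np.random.default_rng(5)
    frame1 = (rng.random((120, 160)) * 255).astype(np.uint8)
    frame1 = cv2.GaussianBlur(frame1, (7, 7), 0)          # make it trackable
    frame2 = np.roll(frame1, shift=(1, 3), axis=(0, 1))

    # Dense optical flow (Farneback); returns (H, W, 2) with (dx, dy) per pixel
    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)

    # A robust estimate of the dominant relative translation
    dx, dy = np.median(flow[..., 0]), np.median(flow[..., 1])
    print(f"estimated shift: dx={dx:.2f}, dy={dy:.2f} (true: 3, 1)")
    ```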

  2. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
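
    The standard fixed-point iteration for the reversible maximum likelihood transition matrix (cf. Prinz et al., J. Chem. Phys. 2011) is short enough to sketch; this is a simplified version of what msm packages such as PyEMMA implement, without the Bayesian machinery:

    ```python
    import numpy as np

    def reversible_mle(C, n_iter=10000, tol=1e-12):
        """Reversible MLE transition matrix from a count matrix C via
        the fixed-point update x_ij = (c_ij + c_ji) / (c_i/x_i + c_j/x_j)."""
        C = np.asarray(C, dtype=float)
        c_row = C.sum(axis=1)
        S = C + C.T
        X = S.copy()                       # symmetric initial guess
        for _ in range(n_iter):
            x_row = X.sum(axis=1)
            denom = (c_row[:, None] / x_row[:, None]
                     + c_row[None, :] / x_row[None, :])
            X_new = S / denom
            if np.max(np.abs(X_new - X)) < tol:
                X = X_new
                break
            X = X_new
        pi = X.sum(axis=1) / X.sum()       # stationary distribution
        T = X / X.sum(axis=1, keepdims=True)
        return T, pi

    C = np.array([[90., 10., 0.], [8., 80., 12.], [0., 15., 85.]])
    T, pi = reversible_mle(C)
    print(np.allclose(pi[:, None] * T, (pi[:, None] * T).T))  # detailed balance
    ```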

  3. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems, which carry sensitivity information. The concept is demonstrated by means of ozone formation and pollution emission.

  4. Groundwater temperature estimation and modeling using hydrogeophysics.

    Science.gov (United States)

    Nguyen, F.; Lesparre, N.; Hermans, T.; Dassargues, A.; Klepikova, M.; Kemna, A.; Caers, J.

    2017-12-01

    Groundwater temperature may be of use as a state-variable proxy for aquifer heat storage, for highlighting preferential flow paths, or for contaminant remediation monitoring. However, its estimation often relies on scarce temperature data collected in boreholes. Hydrogeophysical methods such as electrical resistivity tomography (ERT) and distributed temperature sensing (DTS) may provide more exhaustive spatial information on the bulk properties of interest than samples from boreholes. While a properly calibrated DTS system provides direct measurements of the groundwater temperature in the well, ERT requires one to determine the fractional change of electrical resistivity per degree Celsius. One advantage of this petrophysical relationship is its relative simplicity: the fractional change is often found to be around 0.02 per degree Celsius and mainly represents the variation of electrical resistivity due to the viscosity effect. However, in the presence of chemical and kinetic effects, the variation may also depend on the duration of the test and may neglect reactions occurring between the pore water and the solid matrix. Such effects are not expected to be important for low-temperature systems (<30 °C), at least for short experiments. In this contribution, we use different field experiments under natural and forced flow conditions to review developments in the joint use of DTS and ERT to map and monitor the temperature distribution within aquifers, to characterize aquifers in terms of heterogeneity, and to better understand processes. We show how temperature time-series measurements might be used to constrain the ERT inverse problem in space and time, and how combined ERT-derived and DTS estimation of temperature may be used together with hydrogeological modeling to provide predictions of the groundwater temperature field.
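
    The cited fractional change suggests a simple linear petrophysical conversion, sigma(T) ≈ sigma_ref * (1 + c * (T - T_ref)) with c ≈ 0.02 per degree Celsius; a hypothetical inversion of that relation:

    ```python
    # Hypothetical linear petrophysical model: bulk conductivity rises by a
    # fraction c per degree C relative to a reference temperature
    def temperature_from_conductivity(sigma, sigma_ref, T_ref, c=0.02):
        return T_ref + (sigma / sigma_ref - 1.0) / c

    # A 4% conductivity increase maps to +2 C above the 12 C reference
    print(temperature_from_conductivity(sigma=0.052, sigma_ref=0.050, T_ref=12.0))
    ```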

  5. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  6. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly, decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration...... was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed

  7. Estimation of distribution overlap of urn models.

    Science.gov (United States)

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
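
    For known distributions the dissimilarity probability has a closed form, sum_i p_i (1 - q_i)^n, which makes a Monte Carlo sanity check straightforward (the distributions here are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Two invented discrete distributions over 6 categories
    p = np.array([0.40, 0.30, 0.10, 0.10, 0.05, 0.05])
    q = np.array([0.05, 0.05, 0.10, 0.10, 0.30, 0.40])
    n = 10

    # Dissimilarity: P(one draw from p is unseen in n draws from q)
    exact = np.sum(p * (1.0 - q) ** n)

    # Monte Carlo check
    trials = 100_000
    draws_p = rng.choice(6, size=trials, p=p)
    draws_q = rng.choice(6, size=(trials, n), p=q)
    mc = np.mean(~(draws_q == draws_p[:, None]).any(axis=1))
    print(f"exact: {exact:.4f}   Monte Carlo: {mc:.4f}")
    ```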

  8. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    Currently, the existing estimation of the extreme gust wind, e.g. the 50-year winds of 3 s values, in the IEC standard, is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical...... through turbulent eddies. This process is modeled using the mesoscale Weather Forecasting and Research (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been...

  9. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in econometric research recently, due to the availability of data sources and the interest in studying cross-sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though a few methods take the presence of cross-sectional dependence in the panel into consideration, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated, and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  10. Model-Based Optimizing Control and Estimation Using Modelica Model

    Directory of Open Access Journals (Sweden)

    L. Imsland

    2010-07-01

    Full Text Available This paper reports on experiences from case studies in using Modelica/Dymola models interfaced to control and optimization software, as process models in real time process control applications. Possible applications of the integrated models are in state- and parameter estimation and nonlinear model predictive control. It was found that this approach is clearly possible, providing many advantages over modeling in low-level programming languages. However, some effort is required in making the Modelica models accessible to NMPC software.

  11. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    ...to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods without noise correction is studied. Second, a noise-robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability

  12. NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...

    African Journals Online (AJOL)

    Monthly mean minimum temperature, maximum temperature and relative humidity obtained from the Nigerian Meteorological Agency (NIMET) were used as inputs to the ANFIS model, and monthly mean global solar radiation was used as the model output. Statistical evaluation of the model was done based on ...

  13. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
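
    The p >> n regime and sparse selection described above can be demonstrated in a few lines with scikit-learn's LASSO (synthetic data; the Graphical LASSO and TREX are not shown):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)

    # p >> n: 50 samples, 500 candidate variables, only 5 truly active
    n, p, k = 50, 500, 5
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:k] = [3.0, -2.0, 1.5, -1.0, 2.5]
    y = X @ beta + rng.normal(0.0, 0.5, n)

    model = Lasso(alpha=0.1).fit(X, y)
    selected = np.flatnonzero(model.coef_)
    print(f"{selected.size} of {p} variables selected;",
          "true support recovered:", set(range(k)) <= set(selected))
    ```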

  14. Extreme Earthquake Risk Estimation by Hybrid Modeling

    Science.gov (United States)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

    Estimating the hazard and the economic consequences, i.e., the risk, associated with the occurrence of extreme-magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves in large volumes of the earth's crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, calls for the development of new paradigms and methodologies in order to generate better estimates of both the seismic hazard and its consequences, and, if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies to mitigate and reduce those consequences as much as possible. Here we propose a hybrid modeling approach that uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory, UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples by using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican

  15. Decimative Spectral Estimation with Unconstrained Model Order

    Directory of Open Access Journals (Sweden)

    Stavroula-Evita Fotinea

    2012-01-01

    This paper presents a new state-space method for spectral estimation that performs decimation by any factor. It makes use of the full set of data, brings further apart the poles under consideration, and imposes almost no constraints on the size of the Hankel matrix (model order) as decimation increases. It is compared against two previously proposed techniques for spectral estimation (along with derived decimative versions) that lie among the most promising methods in the field of spectroscopy, where accuracy of parameter estimation is of utmost importance. Moreover, it is compared against a state-of-the-art purely decimative method proposed in the literature. Experiments performed on simulated NMR signals prove the new method to be more robust, especially for low signal-to-noise ratios.
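
    The record does not reproduce the method's equations, but the family it builds on (state-space/subspace spectral estimation from a Hankel matrix) can be sketched as below. This is a generic HSVD-style pole estimator in Python/NumPy with invented signal parameters, not the decimative method of the paper; decimation is omitted.

```python
# Minimal sketch: pole estimation from the shift-invariance of the signal
# subspace of a Hankel data matrix (HSVD-style).
import numpy as np

def hankel_poles(signal, model_order, hankel_rows):
    """Estimate complex signal poles from a 1-D complex signal."""
    cols = len(signal) - hankel_rows + 1
    # Hankel data matrix H[i, j] = signal[i + j]
    H = np.array([signal[i:i + cols] for i in range(hankel_rows)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Uk = U[:, :model_order]                  # signal subspace
    # Shift invariance: Uk without first row = (Uk without last row) @ Z
    Z = np.linalg.pinv(Uk[:-1]) @ Uk[1:]
    return np.linalg.eigvals(Z)              # eigenvalues are the poles

# Two damped complex exponentials in noise
rng = np.random.default_rng(1)
t = np.arange(256)
sig = (np.exp((-0.01 + 2j * np.pi * 0.12) * t)
       + 0.8 * np.exp((-0.02 + 2j * np.pi * 0.31) * t))
sig = sig + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

poles = hankel_poles(sig, model_order=2, hankel_rows=128)
print("frequencies:", np.angle(poles) / (2 * np.pi))  # approx. 0.12 and 0.31
print("dampings:", np.log(np.abs(poles)))             # approx. -0.01 and -0.02
```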

  16. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  17. Estimating the economic impact of a repository from scenario-based surveys: Models of the relation of stated intent to actual behavior

    International Nuclear Information System (INIS)

    Easterling, D.; Morwitz, V.; Kunreuther, H.

    1990-12-01

    The task of estimating the economic impact of a facility as novel and long-lived as a high-level nuclear waste (HLNW) repository is fraught with uncertainty. One approach to the forecasting problem is to survey economic agents as to how they would respond when confronted with hypothetical repository scenarios. A series of such studies conducted for the state of Nevada has examined the potential impact of a Yucca Mountain repository on behavior such as planning conventions, attending conventions, vacationing, outmigration, immigration, and business location. In each case, respondents drawn from a target population report on whether a particular repository event (either some form of an accident, or simply the presence of the facility) would cause them to act any differently than they otherwise would. The responses to such a survey provide an indication of whether or not economic behavior would be altered. However, the analysis is inevitably plagued by the question of how much credence to place in the reports of intended behavior; can we believe what people report they would do in a hypothetical situation? The present study examines a more precise version of this question regarding the validity of stated intent data. After reviewing a variety of literature in the area of intent versus actual behavior, we provide an answer to the question, "What levels of actual behavior are consistent with the intent data that have been observed in the repository surveys?" More formally, we assume that we are generally interested in predicting the proportion of a sample who will actually perform a target behavior. 86 refs., 6 figs., 9 tabs

  18. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
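
    A toy illustration of the idea behind the partition: states whose estimated transition rows coincide are merged into one part. The Python sketch below uses a simple tolerance-based merge in place of the paper's consistent, penalized-likelihood criterion; the chain and the tolerance are invented for illustration.

```python
# Minimal sketch: merge states of a Markov chain with matching transition rows.
import numpy as np

def transition_matrix(seq, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

def merge_states(P, tol=0.05):
    """Group states whose estimated transition rows agree within tol."""
    parts = []
    for s in range(P.shape[0]):
        for part in parts:
            if np.max(np.abs(P[s] - P[part[0]])) < tol:
                part.append(s)
                break
        else:
            parts.append([s])
    return parts

# States 0 and 1 share a transition row by construction: a two-part partition
P_true = np.array([[0.7, 0.1, 0.2],
                   [0.7, 0.1, 0.2],
                   [0.2, 0.3, 0.5]])
rng = np.random.default_rng(1)
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(3, p=P_true[seq[-1]]))

P_hat = transition_matrix(seq, 3)
print(merge_states(P_hat))   # expected: [[0, 1], [2]]
```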

  19. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  20. Estimating water equivalent snow depth from related meteorological variables

    International Nuclear Information System (INIS)

    Steyaert, L.T.; LeDuc, S.K.; Strommen, N.D.; Nicodemus, M.L.; Guttman, N.B.

    1980-05-01

    Engineering design must take into consideration natural loads and stresses caused by meteorological elements such as wind, snow, precipitation and temperature. The purpose of this study was to determine a relationship of water equivalent snow depth measurements to meteorological variables. Several predictor models were evaluated for use in estimating water equivalent values. These models include linear regression, principal component regression, and non-linear regression models. Linear, non-linear and Scandinavian models are used to generate annual water equivalent estimates for approximately 1100 cooperative data stations where predictor variables are available, but which have no water equivalent measurements. These estimates are used to develop probability estimates of snow load for each station. Map analyses for 3 probability levels are presented

  1. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  2. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient-specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic alignment of pre-operative CTA data with intra-operative X-ray imaging. Due to a trend towards prospective electrocardiogram gating techniques, 4D imaging data, from which motion information could be extracted, is not commonly available. The prediction of motion from shape information is thus relevant...

  3. Software Cost Estimating Models: A Comparative Study of What the Models Estimate

    Science.gov (United States)

    1993-09-01

    generate good cost estimates. One model developer best summed up this sentiment by stating: "Estimation is not a mechanical process. Art, skill, and..." [The remainder of this record is garbled table residue listing allocation percentages for SASET development phases, e.g. System Concept 7.5% and S/W Requirements Analysis 9.0%; the rest is unrecoverable.]

  4. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models, this paper proposes a GSM model - the Bessel K model - that induces concave penalty functions for the estimation of complex sparse signals. The properties of the Bessel K model are analyzed when it is applied to Type I and Type II estimation. This analysis reveals that, by tuning the parameters of the mixing pdf, different penalty functions are invoked depending on the estimation type used, the value of the noise variance, and whether real or complex signals are estimated. Using the Bessel K model, we derive a sparse estimator based on a modification of the expectation-maximization algorithm formulated...

  5. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point on the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
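
    A standard diagnostic for the multicollinearity problem this record highlights is the variance inflation factor (VIF). The NumPy sketch below computes VIFs for deliberately collinear, invented predictors; the variable names and data are illustrative, not the paper's. Predictors with VIF well above roughly 10 are usually dropped or combined.

```python
# Minimal sketch: variance inflation factors for candidate predictors.
import numpy as np

def vif(X):
    """VIF of each column, from the R^2 of regressing it on the others."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])      # include intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(2)
temp = rng.normal(25, 5, 200)
humidity = -0.8 * temp + rng.normal(0, 1, 200)   # strongly collinear with temp
altitude = rng.normal(500, 100, 200)
print(vif(np.column_stack([temp, humidity, altitude])))  # large, large, ~1
```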

  6. Nonparametric model assisted model calibrated estimation in two ...

    African Journals Online (AJOL)

    Nonparametric model assisted model calibrated estimation in two stage survey sampling. RO Otieno, PN Mwita, PN Kihara. No abstract available. East African Journal of Statistics Vol. 1 (3) 2007: pp. 261-281.

  7. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  8. Multiple imputation for handling missing outcome data when estimating the relative risk.

    Science.gov (United States)

    Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-09-06

    Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its
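
    The workflow this record evaluates can be sketched as follows: a logistic imputation model for the missing binary outcome (the core of fully conditional specification), stochastic draws rather than rounding, a log binomial GLM for the relative risk, and pooling by Rubin's rules. The sketch assumes statsmodels is available, simulates its own data, and omits the posterior draw of the imputation-model coefficients that a full MICE implementation would include.

```python
# Minimal sketch: FCS-style outcome imputation + log binomial relative risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
exposure = rng.binomial(1, 0.4, n)
age = rng.normal(50, 10, n)
p_outcome = np.exp(-2.2 + 0.5 * exposure + 0.01 * (age - 50))  # true RR ~ e^0.5
outcome = rng.binomial(1, np.clip(p_outcome, 0, 1)).astype(float)
outcome[rng.random(n) < 0.25] = np.nan          # outcomes missing (MCAR here)

X = sm.add_constant(np.column_stack([exposure, age]))
obs = ~np.isnan(outcome)

log_rrs, variances = [], []
for _ in range(20):
    # Imputation model: logistic regression fit on the observed outcomes.
    # (A full MICE would also draw these coefficients from their posterior.)
    fit_imp = sm.GLM(outcome[obs], X[obs], family=sm.families.Binomial()).fit()
    y_imp = outcome.copy()
    y_imp[~obs] = rng.binomial(1, fit_imp.predict(X[~obs]))  # stochastic draw
    # Analysis model: log binomial GLM; coefficient 1 is the log relative risk
    fit = sm.GLM(y_imp, X, family=sm.families.Binomial(
        link=sm.families.links.Log())).fit()
    log_rrs.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

# Rubin's rules: pooled point estimate and total variance
qbar, ubar = np.mean(log_rrs), np.mean(variances)
total_var = ubar + (1 + 1 / len(log_rrs)) * np.var(log_rrs, ddof=1)
lo, hi = qbar - 1.96 * np.sqrt(total_var), qbar + 1.96 * np.sqrt(total_var)
print("pooled RR: %.3f (95%% CI %.3f-%.3f)" % tuple(np.exp([qbar, lo, hi])))
```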

  9. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually minimizes the level of uncertainty and yet yields "good" point estimates for budget and duration given the well type. In this paper, copula probability theory is used in order to model the dependencies between cost/duration and MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, where the study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate-to-weak dependence on MRI, which means that the variation in these wells can be related to MRI, but not to the extent that it is the primary source.
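
    A Gaussian copula is one common way to realize the dependence modeling described here. The Python/SciPy sketch below ties illustrative MRI, cost, and duration marginals together through an assumed rank-correlation matrix and simulates 10,000 scenarios; every distribution and correlation value is invented for illustration, not taken from the paper.

```python
# Minimal sketch: a Gaussian copula linking MRI, drilling cost, and duration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Assumed correlations between (MRI, cost, duration) -- illustrative only
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.7],
              [0.5, 0.7, 1.0]])

# 10,000 scenarios: correlated normals -> uniforms -> chosen marginals
z = rng.multivariate_normal(np.zeros(3), R, size=10_000)
u = stats.norm.cdf(z)
mri = stats.lognorm(s=0.4, scale=100).ppf(u[:, 0])
cost = stats.gamma(a=4, scale=2.5e6).ppf(u[:, 1])            # dollars
duration = stats.weibull_min(c=1.8, scale=60).ppf(u[:, 2])   # days

# Conditional estimate: expected cost for high-MRI wells
high = mri > np.quantile(mri, 0.9)
print("mean cost, high-MRI wells: %.2e" % cost[high].mean())
print("mean cost, all wells:      %.2e" % cost.mean())
```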

  10. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
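
    One way to realize the single-model idea, sketched here under assumptions since the report's exact basis functions are not given, is a truncated power basis: each segment contributes a polynomial term that switches on above its boundary, so a single least-squares fit spans all segments at once. The boundaries and data below are invented for illustration.

```python
# Minimal sketch: one joint fit across calibration segments via a truncated
# power basis (each knot adds a term active only above that height).
import numpy as np

def design(h, knots, degree=2):
    cols = [h ** d for d in range(degree + 1)]          # global polynomial
    for k in knots:                                     # extended segment terms
        cols.append(np.where(h > k, (h - k) ** degree, 0.0))
    return np.column_stack(cols)

rng = np.random.default_rng(5)
height = np.sort(rng.uniform(0, 10, 300))
# Piecewise-quadratic "true" calibration with a segment change at h = 4
volume = 2 * height + 0.3 * height**2
volume += np.where(height > 4, 0.2 * (height - 4) ** 2, 0.0)
volume += rng.normal(0, 0.5, height.size)               # measurement noise

X = design(height, knots=[4.0])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)       # one fit, all segments
print("max abs residual:", np.abs(volume - X @ coef).max())
```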

  11. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique and best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

  12. Estimated Perennial Streams of Idaho and Related Geospatial Datasets

    Science.gov (United States)

    Rea, Alan; Skinner, Kenneth D.

    2009-01-01

    The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus and minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point. Site-specific flow data, assuming a sufficient period of

  13. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
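
    The core of the inversion can be sketched compactly: rotate the test sensor's horizontal components and maximize the correlation with the reference north component using a scalar nonlinear optimizer. The Python sketch below omits the paper's overlapping-window scheme; all signals are synthetic and the 12-degree offset is an invented test value.

```python
# Minimal sketch: relative azimuth by maximizing correlation under rotation.
import numpy as np
from scipy.optimize import minimize_scalar

def rotate(n, e, theta):
    """North component seen by a sensor rotated by theta radians."""
    return n * np.cos(theta) + e * np.sin(theta)

def estimate_azimuth(ref_n, test_n, test_e):
    neg_corr = lambda th: -np.corrcoef(ref_n, rotate(test_n, test_e, th))[0, 1]
    res = minimize_scalar(neg_corr, bounds=(-np.pi, np.pi), method="bounded")
    return np.degrees(res.x)

# Synthetic test: the second sensor is misoriented by 12 degrees
rng = np.random.default_rng(6)
north = rng.standard_normal(5000)
east = rng.standard_normal(5000)
true = np.radians(12.0)
test_n = north * np.cos(true) - east * np.sin(true)
test_e = north * np.sin(true) + east * np.cos(true)

print("estimated offset (deg):", estimate_azimuth(north, test_n, test_e))
```

    Repeating the fit over overlapping data windows, as the record describes, turns this single estimate into a set of estimates whose spread indicates the confidence of the azimuth, even at low SNR.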

  15. Robust estimation of errors-in-variables models using M-estimators

    Science.gov (United States)

    Guo, Cuiping; Peng, Junhuan

    2017-07-01

    The traditional errors-in-variables (EIV) models are widely adopted in applied sciences. The EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors to achieve a reliable solution.
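
    For intuition, here is a plain Huber M-estimator implemented by iteratively reweighted least squares. Note this handles gross errors in the observations only; the paper's RWTLS additionally treats errors in the design matrix, and that total-least-squares machinery is not reproduced here. Data and tuning constant are illustrative.

```python
# Minimal sketch: Huber M-estimation via iteratively reweighted least squares.
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / (s + 1e-12)
        w = np.where(u <= c, 1.0, c / u)                 # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)
y[:5] += 50                                              # gross errors
print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", huber_irls(X, y))                        # close to [1, 2]
```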

  16. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  17. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for significant reduction of the recurrent launch cost, which would result in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that allows estimating the benefits to be expected from a new space venture. The main objective of why humans should explore space is determined in this study to be "improving the quality of life". This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which are usable for the optimisation process of reusable launch vehicles.

  18. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step toward advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
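
    Gaussian process regression is one standard way to obtain the predictive variance the record argues for. The scikit-learn sketch below fits an assumed plume-like concentration field and picks the next measurement location where the predictive standard deviation is largest; the kernel choice, field, and noise level are invented for illustration.

```python
# Minimal sketch: predictive variance from a Gaussian process gas model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
X_obs = rng.uniform(0, 10, (40, 2))                     # measurement locations
conc = np.exp(-np.sum((X_obs - 5) ** 2, axis=1) / 8)    # plume-like field
conc += 0.05 * rng.standard_normal(40)                  # sensor noise

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(0.01))
gp.fit(X_obs, conc)

grid = np.array([[x, y] for x in np.linspace(0, 10, 25)
                        for y in np.linspace(0, 10, 25)])
mean, std = gp.predict(grid, return_std=True)
# Suggest the next measurement where the model is least certain
print("next measurement location:", grid[np.argmax(std)])
```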

  19. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable. As such, the results could not serve to calibrate root-zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge estimated flux was for autumn

  20. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.

    Science.gov (United States)

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-09-20

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimate problem. However, the UKF usually plays well in Gaussian noises. Its performance may deteriorate substantially in the presence of non-Gaussian noises, especially when the measurements are disturbed by some heavy-tailed impulsive noises. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of UKF against impulsive noises. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimation and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is adopted to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
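
    The robustness mechanism of the maximum correntropy criterion can be seen in isolation on a linear regression problem: a fixed-point (half-quadratic) iteration in which each sample is weighted by a Gaussian kernel of its residual, so heavy-tailed outliers are automatically down-weighted. The sketch below is that simplified setting only, not the full MCUKF measurement update; data and kernel bandwidth are illustrative.

```python
# Minimal sketch: maximum correntropy criterion via fixed-point reweighting.
import numpy as np

def mcc_fit(X, y, sigma=1.0, iters=30):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.exp(-r**2 / (2 * sigma**2))       # correntropy (kernel) weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([0.5, -1.5]) + 0.1 * rng.standard_normal(200)
y[:10] += rng.standard_normal(10) * 20           # heavy-tailed impulsive noise
print(mcc_fit(X, y))                             # close to [0.5, -1.5]
```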

  1. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation in a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to provide at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend the lifetime beyond it. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with accompanying advantages and disadvantages, are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but includes a high number of degrees of freedom. The second approach in this contribution does not require a priori knowledge about the system's aging, as particular model parameters are defined in accordance with a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH. This model

  2. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is presented. The present approach to parameter estimation of component reliability models in NPP Krsko is described. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated by an appropriate probability distribution (in this paper, the lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)

  3. Estimating, budgets, and funds management as related to unused funds

    International Nuclear Information System (INIS)

    Hutterman, L.

    1994-01-01

    The Department of Energy Environmental Restoration Program each year has a large reserve of funds that is uncosted and unobligated (unused funds). These are funds that Congress has made available to the Department's Environmental Restoration Program but that the Program has not used to perform the scope assigned to it. This paper raises the question of whether this problem is related to estimating, budgeting, or funds management, and offers some tools to deal with it

  4. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  5. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state-estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be completed safely. One of the sensor configurations used for UAV state estimation is the Attitude and Heading Reference System (AHRS), with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  6. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  7. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite properties of the estimators, namely: sum of biases, sum of absolute biases, sum of variances, and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  8. State-level estimates of cancer-related absenteeism costs.

    Science.gov (United States)

    Tangka, Florence K; Trogdon, Justin G; Nwaise, Isaac; Ekwueme, Donatus U; Guy, Gery P; Orenstein, Diane

    2013-09-01

    Cancer is one of the top five most costly diseases in the United States and leads to substantial work loss. Nevertheless, limited state-level estimates of cancer absenteeism costs have been published. In analyses of data from the 2004-2008 Medical Expenditure Panel Survey, the 2004 National Nursing Home Survey, the U.S. Census Bureau for 2008, and the 2009 Current Population Survey, we used regression modeling to estimate annual state-level absenteeism costs attributable to cancer from 2004 to 2008. We estimated that the state-level median number of days of absenteeism per year among employed cancer patients was 6.1 days and that annual state-level cancer absenteeism costs ranged from $14.9 million to $915.9 million (median = $115.9 million) across states in 2010 dollars. Absenteeism costs are approximately 6.5% of the costs of premature cancer mortality. The results from this study suggest that lost productivity attributable to cancer is a substantial cost to employees and employers and contributes to estimates of the overall impact of cancer in a state population.

  9. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has numerous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
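
    Of the estimators compared, the Chao lower bound has the simplest closed form, N_hat = S_obs + f1^2 / (2 f2), where f1 and f2 are the counts of individuals seen exactly once and exactly twice. A minimal sketch follows; the bias-corrected fallback for f2 = 0 is a standard convention assumed here, not something stated in the record.

```python
# Minimal sketch: the Chao lower-bound estimator for a single-list study.
import numpy as np

def chao_estimate(counts):
    """counts[i] = number of times individual i was observed (all >= 1)."""
    counts = np.asarray(counts)
    s_obs = len(counts)                  # individuals observed at least once
    f1 = np.sum(counts == 1)             # singletons
    f2 = np.sum(counts == 2)             # doubletons
    if f2 > 0:
        return s_obs + f1**2 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2     # bias-corrected fallback

# 60 singletons, 25 doubletons, 15 individuals seen three times
counts = np.repeat([1, 2, 3], [60, 25, 15])
print(chao_estimate(counts))             # 100 + 60^2 / 50 = 172
```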

  10. A Derivative Based Estimator for Semiparametric Index Models

    NARCIS (Netherlands)

    Donkers, A.C.D.; Schafgans, M.

    2003-01-01

    This paper proposes a semiparametric estimator for single- and multiple-index models. It provides an extension of the average derivative estimator to the multiple index model setting. The estimator uses the average of the outer product of derivatives and is shown to be root-N consistent and

  11. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    /estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...

  12. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  13. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  14. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  15. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  16. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model, including the handling of measurement errors, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
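
    The textbook core of such an estimator is the weighted least-squares solution of the measurement model z = Hx + e. The sketch below solves a tiny DC-approximation example; the network, true state, and measurement variances are invented for illustration.

```python
# Minimal sketch: weighted least-squares static state estimation.
import numpy as np

def wls_state_estimate(H, z, meas_var):
    """Solve (H^T W H) x = H^T W z with W = diag(1 / meas_var)."""
    W = np.diag(1.0 / np.asarray(meas_var))
    G = H.T @ W @ H                      # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    residuals = z - H @ x_hat            # basis for bad-data detection
    return x_hat, residuals

# Three angle-difference measurements of a two-state (bus-angle) system
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([0.10, 0.04])          # radians
z = H @ x_true + np.array([0.002, -0.001, 0.001])   # noisy measurements
x_hat, r = wls_state_estimate(H, z, meas_var=[1e-4, 1e-4, 4e-4])
print("estimated state:", x_hat)
```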

  17. Semiparametric Efficient Adaptive Estimation of the PTTGARCH model

    OpenAIRE

    Ciccarelli, Nicola

    2016-01-01

    Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a conditional heteroskedasticity and asymmetric volatility GARCH-type model (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation and via the Newton-Raphson technique applied on the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator tha...

  18. Challenges in Species Tree Estimation Under the Multispecies Coalescent Model.

    Science.gov (United States)

    Xu, Bo; Yang, Ziheng

    2016-12-01

    The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the

  19. Analytical Estimation of Water-Oil Relative Permeabilities through Fractures

    Directory of Open Access Journals (Sweden)

    Saboorian-Jooybari Hadi

    2016-05-01

    Modeling multiphase flow through fractures is a key issue for understanding flow mechanisms and performance prediction of fractured petroleum reservoirs, geothermal reservoirs, underground aquifers and carbon-dioxide sequestration. One of the most challenging subjects in modeling of fractured petroleum reservoirs is quantifying the competition of fluids for flow in the fracture network (relative permeability curves). Unfortunately, there is no standard technique for experimental measurement of relative permeabilities through fractures, and the existing methods are very expensive, time consuming and error-prone. Although several formulations were presented to calculate fracture relative permeability curves in the form of linear and power functions of flowing fluids' saturation, it is still unclear what form of relative permeability curves must be used for proper modeling of flow through fractures and consequently accurate reservoir simulation. Basically, the classic linear relative permeability (X-type) curves are used in almost all reservoir simulators. In this work, basic fluid flow equations are combined to develop a new simple analytical model for water-oil two-phase flow in a single fracture. The model gives rise to simple analytic formulations for fracture relative permeabilities. The model explicitly proves that water-oil relative permeabilities in a fracture network are functions of fluid saturation, viscosity ratio, fluid density, inclination of the fracture plane from the horizon, pressure gradient along the fracture and rock matrix wettability, whereas they were considered to be only functions of saturations in the classic X-type and power (Corey [35] and Honarpour et al. [28, 29]) models. Eventually, the validity of the proposed formulations is checked against experimental data from the literature. The proposed fracture relative permeability functions have several advantages over the existing ones. Firstly, they are explicit functions of parameters which are known for

  20. Cosmological models in general relativity

    Indian Academy of Sciences (India)

    Cosmological models in general relativity. B B PAUL. Department of Physics, Nowgong College, Nagaon, Assam, India. MS received 4 October 2002; revised 6 March 2003; accepted 21 May 2003. Abstract. LRS Bianchi type-I space-time filled with perfect fluid is considered here with the deceleration parameter as a variable.

  1. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for nonlinear state estimation. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
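
    A minimal sketch of the ingredient that gives the MCUKF its robustness: the Gaussian-kernel correntropy of a residual sequence is bounded, so heavy-tailed impulses barely move it, unlike the mean squared error. The kernel bandwidth sigma is an assumed tuning parameter; this is not the full filter.

      import numpy as np

      def correntropy(residuals, sigma=2.0):
          # Gaussian-kernel correntropy: bounded, so large (impulsive)
          # residuals contribute little, unlike the quadratic loss.
          return np.mean(np.exp(-residuals**2 / (2.0 * sigma**2)))

      rng = np.random.default_rng(0)
      clean = rng.normal(0.0, 1.0, 1000)
      dirty = clean.copy()
      dirty[::50] += rng.standard_t(1, size=20) * 50.0   # heavy-tailed impulses

      for name, r in [("Gaussian noise", clean), ("impulsive noise", dirty)]:
          print(f"{name}: MSE={np.mean(r**2):8.2f}  correntropy={correntropy(r):.3f}")

    The MSE explodes under the impulses while the correntropy barely changes, which is why an MCC cost keeps the filter stable.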

  2. Volatility estimation using a rational GARCH model

    Directory of Open Access Journals (Sweden)

    Tetsuya Takaishi

    2018-03-01

    Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model, called the RGARCH-Exp model, that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility, and compare the RGARCH-type models with other asymmetric-type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
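
    For orientation, a minimal sketch of the baseline the paper compares against: a plain GARCH(1,1) fitted by Gaussian maximum likelihood with scipy. The RGARCH and RGARCH-Exp recursions themselves are not reproduced here, and the simulated parameter values are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def garch11_nll(params, r):
          # Negative Gaussian log-likelihood of a GARCH(1,1):
          # sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
          omega, alpha, beta = params
          if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
              return np.inf
          s2 = np.empty_like(r)
          s2[0] = r.var()
          for t in range(1, len(r)):
              s2[t] = omega + alpha * r[t-1]**2 + beta * s2[t-1]
          return 0.5 * np.sum(np.log(2*np.pi*s2) + r**2 / s2)

      rng = np.random.default_rng(1)
      # Simulate returns from a known GARCH(1,1) to check the fit.
      omega, alpha, beta = 0.1, 0.1, 0.8
      r = np.empty(2000); s2 = omega / (1 - alpha - beta)
      for t in range(2000):
          r[t] = rng.normal() * np.sqrt(s2)
          s2 = omega + alpha * r[t]**2 + beta * s2

      fit = minimize(garch11_nll, x0=[0.05, 0.05, 0.9], args=(r,), method="Nelder-Mead")
      print("estimated (omega, alpha, beta):", np.round(fit.x, 3))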

  3. Comparison of two intelligent models to estimate the instantaneous ...

    Indian Academy of Sciences (India)

    Mostafa Zamani Mohiabadi

    2017-07-25

    Jul 25, 2017 ... 2014) has combined empirical models and a Bayesian neural network (BNN) model to estimate daily global solar radiation on a horizontal surface in Ghardaïa, Algeria. In their model, the maximum and minimum air temperatures of the year 2006 have been used to estimate the coefficients of the empirical ...

  4. A Contingent Trip Model for Estimating Rail-trail Demand

    Science.gov (United States)

    Carter J. Betz; John C. Bergstrom; J. Michael Bowker

    2003-01-01

    The authors develop a contingent trip model to estimate the recreation demand for, and value of, a potential rail-trail site in north-east Georgia. The contingent trip model is an alternative to travel cost modelling that is useful for ex ante evaluation of proposed recreation resources or management alternatives. The authors estimate the empirical demand for trips using a...

  5. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model from measurements at specific monitoring locations using linear regression, although the model is intrinsically non-linear. With these estimated parameters and the Gaussian model, the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the deposition estimates, deposition data have been generated with the Gaussian model, including a measurement error, by Monte Carlo simulation; this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The technique has also been applied to the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor of 1.5 worse. The present type of analysis makes a cost-benefit approach to the
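
    A minimal sketch of the regression trick the report alludes to: the Gaussian model is nonlinear in its parameters, but taking logarithms of the simulated deposition values makes it linear, so ordinary least squares applies. The crosswind profile, parameter values and noise model below are illustrative assumptions, not the report's setup.

      import numpy as np

      # Crosswind deposition profile at a fixed downwind distance, assumed
      # Gaussian: d(y) = A * exp(-y^2 / (2 sigma_y^2)). Taking logs gives
      # log d = log A - y^2 / (2 sigma_y^2), which is linear in y^2.
      rng = np.random.default_rng(2)
      A_true, sigma_true = 5.0, 800.0              # illustrative values
      y = np.linspace(-2000, 2000, 21)             # monitoring locations [m]
      d = A_true * np.exp(-y**2 / (2 * sigma_true**2))
      d_obs = d * rng.lognormal(0.0, 0.05, y.size)  # multiplicative meas. error

      # Ordinary least squares of log d on y^2.
      X = np.column_stack([np.ones_like(y), y**2])
      coef, *_ = np.linalg.lstsq(X, np.log(d_obs), rcond=None)
      A_hat = np.exp(coef[0])
      sigma_hat = np.sqrt(-1.0 / (2.0 * coef[1]))
      print(f"A: true {A_true}, est {A_hat:.2f}; sigma_y: true {sigma_true}, est {sigma_hat:.0f}")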

  6. Uncertainty related to Environmental Data and Estimated Extreme Events

    DEFF Research Database (Denmark)

    Burcharth, H. F.

    The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertainties, including those corresponding to extreme estimates typically used for design purposes. Basically, a design condition is made up of a set of parameter values stemming from several environmental parameters. To be able to evaluate the uncertainty related to design states one must know the corresponding joint probability density function (j.p.d.f.). However, the j.p.d.f.'s are in general site specific and very few examples exist where the necessary intensive investigations to establish such functions have been performed. Moreover, a general theoretical treatment of the problem is mathematically complicated...

  7. Using 7Be measurements to estimate the relative contributions of interrill and rill erosion

    Science.gov (United States)

    Zhang, Feng-Bao; Yang, Ming-Yi; Walling, Des E.; Zhang, Bo

    2014-02-01

    Rapid and reliable methods for estimating the relative contributions of interrill and rill erosion during a rainfall event are needed to provide an improved understanding of soil erosion processes and to develop process-based soil erosion prediction models. Use of the radionuclide 7Be in controlled experiments provides a means of addressing this need, and this paper reports an experimental study aimed at refining and testing procedures employed to estimate the relative contributions of the two components of erosion. Four experimental plots (area 5 × 2 m, at 10°, 15°, 20°, and 25° slope), filled with a loessial soil, manually tilled, and kept free of weeds with herbicides, were subjected to high intensity rainfall (91.8-120.6 mm h⁻¹) in order to induce rill development. The evolution of the rill network was documented photographically during the rainfall events, and the runoff and sediment output from the plots were collected and measured. The sediment was recovered from the runoff and its mass and 7Be activity were measured. The Yang model, reported previously, was used to estimate the relative contributions of interrill and rill erosion from the 7Be activity of the exported sediment, and this model was further refined to take account of the dynamic growth of the rills during the rainfall event. The results from the experiments were also used to develop a simple empirical linear model for estimating the relative contributions of interrill and rill erosion from the 7Be measurements. A comparison of the results provided by the three models showed some differences in the estimated magnitudes of the relative contributions, although their trends during the event were similar. The estimates provided by the empirical linear model tended to be higher than those obtained using the refined model and lower than those generated by the Yang model, but were closer to those provided by the refined model, which was seen as being theoretically the most accurate model. The

  8. Group-Contribution based Property Estimation and Uncertainty analysis for Flammability-related Properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    This study presents new group contribution (GC) models for the prediction of Lower and Upper Flammability Limits (LFL and UFL), Flash Point (FP) and Auto Ignition Temperature (AIT) of organic chemicals applying the Marrero/Gani (MG) method. Advanced methods for parameter estimation using robust regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of the estimated parameters was performed. Therefore, every estimated value of the flammability-related properties is reported together with its corresponding 95% confidence interval.

  9. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other software does not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
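
    PARES itself is a MATLAB tool; the following is a minimal Python analogue of the integration-based least-squares idea, fitting rate constants of an assumed two-step kinetic model with scipy. The kinetics, noise level and starting guesses are invented for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def model(t, c, k1, k2):
          # Assumed two-step kinetics A -> B -> C (illustrative, not from the paper).
          a, b = c
          return [-k1 * a, k1 * a - k2 * b]

      def residuals(params, t_obs, b_obs):
          # Integrate the ODE for the trial parameters, compare to data.
          sol = solve_ivp(model, (0, t_obs[-1]), [1.0, 0.0],
                          t_eval=t_obs, args=tuple(params))
          return sol.y[1] - b_obs

      t_obs = np.linspace(0, 10, 15)
      rng = np.random.default_rng(3)
      true = (0.8, 0.3)
      b_true = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true).y[1]
      b_obs = b_true + rng.normal(0, 0.01, t_obs.size)

      fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_obs, b_obs))
      print("estimated (k1, k2):", np.round(fit.x, 3))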

  10. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands deviate from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation added 1.3 mm of water per occurrence, with a frequency of once every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  11. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  12. Epistemology and Rosen's Modeling Relation

    International Nuclear Information System (INIS)

    Dress, W.B.

    1999-01-01

    Rosen's modeling relation is embedded in Popper's three worlds to provide a heuristic tool for model building and a guide for thinking about complex systems. The utility of this construct is demonstrated by suggesting a solution to the problem of pseudoscience and a resolution of the famous Bohr-Einstein debates. A theory of bizarre systems is presented by analogy with the entangled particles of quantum mechanics. This theory underscores the poverty of present-day computational systems (e.g., computers) for creating complex and bizarre entities by distinguishing between mechanism and organism.

  13. ESTIMATION OF INTRINSIC AND EXTRINSIC ENVIRONMENT FACTORS OF AGE-RELATED TOOTH COLOUR CHANGES

    Czech Academy of Sciences Publication Activity Database

    Hyšpler, P.; Jezbera, D.; Fürst, T.; Mikšík, Ivan; Waclawek, M.

    2010-01-01

    Roč. 17, č. 4 (2010), s. 515-525 ISSN 1898-6196 Institutional research plan: CEZ:AV0Z50110509 Keywords : age-related colour changes of teeth * intrinsic and extrinsic factors * 3D mathematical regression models * estimation of real age Subject RIV: ED - Physiology Impact factor: 0.294, year: 2010

  14. Estimation of storm runoff loads based on rainfall-related variables ...

    African Journals Online (AJOL)

    The comparative study indicated that event loads are better estimated as power functions of storm-related independent variables. On the notion that rainfall data are more readily available and easier and less expensive to collect than runoff data, the calibrated model was verified using rainfall volume as the independent variable.

  15. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  17. Probability density estimation in stochastic environmental models using reverse representations

    NARCIS (Netherlands)

    Van den Berg, E.; Heemink, A.W.; Lin, H.X.; Schoenmakers, J.G.M.

    2003-01-01

    The estimation of probability densities of variables described by systems of stochastic differential equations has long been done using forward-time estimators, which rely on the generation of realizations of the model, forward in time. Recently, an estimator based on the combination of forward and

  18. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to their properties when the sample size is infinite, although ...

  19. Persuasion, Politeness and Relational Models

    Directory of Open Access Journals (Sweden)

    Jerzy Świątek

    2017-06-01

    Full Text Available Politeness Theory, just like Grice's Cooperative Principle, points out that the pragmatic analysis of language behaviour has to be grounded in extra-linguistic facts of a social (or even biological) nature. Additionally, despite the slightly misleading label, Politeness Theory provides a sound methodology for explaining some persuasive as well as politeness phenomena. In the same vein, the so-called Relational Model Theory provides another theoretical framework for the explanation of persuasive phenomena and persuasive language. Both Relational Model Theory and Politeness Theory show that persuasion is also to be understood as a rational response to not-so-rational social and biological needs. In the article an attempt is made to compare the two theories, focusing on their explanatory power with reference to language choices aiming at enhancing the persuasive potential of a language message.

  20. Diffuse solar radiation estimation models for Turkey's big cities

    International Nuclear Information System (INIS)

    Ulgen, Koray; Hepbasli, Arif

    2009-01-01

    A reasonably accurate knowledge of the availability of the solar resource at any place is required by solar engineers, architects, agriculturists, and hydrologists in many applications of solar energy such as solar furnaces, concentrating collectors, and interior illumination of buildings. For this purpose, various empirical models (or correlations) have been developed in the past to estimate solar radiation around the world. This study deals with diffuse solar radiation estimation models, along with the statistical test methods used to evaluate their performance. Models used to predict monthly average daily values of diffuse solar radiation are classified into four groups as follows: (i) from the diffuse fraction or cloudiness index as a function of the clearness index, (ii) from the diffuse fraction or cloudiness index as a function of the relative sunshine duration or sunshine fraction, (iii) from the diffuse coefficient as a function of the clearness index, and (iv) from the diffuse coefficient as a function of the relative sunshine duration or sunshine fraction. Empirical correlations are also developed to establish a relationship of the monthly average daily diffuse fraction or cloudiness index (Kd) and monthly average daily diffuse coefficient (Kdd) with the monthly average daily clearness index (KT) and monthly average daily sunshine fraction (S/So) for the three biggest cities by population in Turkey (Istanbul, Ankara and Izmir). Although global solar radiation on a horizontal surface and sunshine duration have been measured by the Turkish State Meteorological Service (STMS) across the whole country since 1964, diffuse solar radiation has not been measured. The eight new models for estimating the monthly average daily diffuse solar radiation on a horizontal surface in the three big cities are validated, and thus the most accurate model is selected for guiding future projects. The new models are then compared with the 32 models available in the
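
    A minimal sketch of fitting a group (i) correlation, the diffuse fraction as a polynomial function of the clearness index; the synthetic data and coefficients below are illustrative and are not the Turkish station records or the paper's models.

      import numpy as np

      # Synthetic monthly-average data: clearness index KT and diffuse
      # fraction Kd (illustrative, not the Turkish station records).
      rng = np.random.default_rng(4)
      KT = rng.uniform(0.3, 0.7, 36)
      Kd = 1.39 - 1.56 * KT + rng.normal(0, 0.02, KT.size)  # assumed linear truth

      # Fit first- and third-order correlations Kd = f(KT), as in group (i).
      for order in (1, 3):
          coef = np.polyfit(KT, Kd, order)
          pred = np.polyval(coef, KT)
          rmse = np.sqrt(np.mean((pred - Kd) ** 2))
          print(f"order {order}: coefficients {np.round(coef, 3)}, RMSE {rmse:.4f}")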

  1. TPmsm: Estimation of the Transition Probabilities in 3-State Models

    Directory of Open Access Journals (Sweden)

    Artur Araújo

    2014-12-01

    Full Text Available One major goal in clinical applications of multi-state models is the estimation of transition probabilities. The usual nonparametric estimator of the transition matrix for non-homogeneous Markov processes is the Aalen-Johansen estimator (Aalen and Johansen 1978). However, two problems may arise from using this estimator: first, its standard error may be large in heavily censored scenarios; second, the estimator may be inconsistent if the process is non-Markovian. The development of the R package TPmsm has been motivated by several recent contributions that account for these estimation problems. Estimation and statistical inference for transition probabilities can be performed using TPmsm. The TPmsm package provides seven different approaches to three-state illness-death modeling. In two of these approaches the transition probabilities are estimated conditionally on current or past covariate measures. Two real data examples are included to illustrate software usage.

  2. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and the variables registered from that simulation were used to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while offering greater flexibility than the model programmed in PSIM.

  3. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  4. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, but it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic state-space model are compared. The first is based on a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  5. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained via modeling of a remote-sensing system; the remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are then estimated from the simulated data using a least-squares batch processor. The required resolution, sensor array size, and number and location of sensor readings can be determined from the accuracies of the parameter estimates.
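
    A minimal sketch of the batch least-squares step under simplified assumptions: a 2-D Gaussian concentration patch stands in for the instantaneous-release model, noisy "remote-sensed" samples are generated, and scipy's curve_fit recovers the parameters together with their covariance.

      import numpy as np
      from scipy.optimize import curve_fit

      def conc(xy, m, x0, y0, s):
          # Instantaneous release spread into a 2-D Gaussian patch
          # (a simplified stand-in for the shear-diffusion model).
          x, y = xy
          return m * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2))

      rng = np.random.default_rng(5)
      x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
      xy = (x.ravel(), y.ravel())
      true = (8.0, 4.0, 6.0, 1.5)
      z = conc(xy, *true) + rng.normal(0, 0.1, x.size)   # Gaussian sensor noise

      popt, pcov = curve_fit(conc, xy, z, p0=(5.0, 5.0, 5.0, 2.0))
      print("true:", true)
      print("est :", np.round(popt, 2), "std:", np.round(np.sqrt(np.diag(pcov)), 3))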

  6. Estimates of Leaf Relative Water Content from Optical Polarization Measurements

    Science.gov (United States)

    Dahlgren, R. P.; Vanderbilt, V. C.; Daughtry, C. S. T.

    2017-12-01

    Remotely sensing the water status of plant canopies remains a long-term goal of remote sensing research. Existing approaches to remotely sensing canopy water status, such as the Crop Water Stress Index (CWSI) and the Equivalent Water Thickness (EWT), have limitations. The CWSI, based upon remotely sensing canopy radiant temperature in the thermal infrared spectral region, does not work well in humid regions, requires estimates of the vapor pressure deficit near the canopy during the remote sensing over-flight and, once stomata close, provides little information regarding the canopy water status. The EWT is based upon the physics of water-light interaction in the 900-2000 nm spectral region, not plant physiology. Our goal, development of a remote sensing technique for estimating plant water status based upon measurements in the VIS/NIR spectral region, would potentially provide remote sensing access to plant dehydration physiology - to the cellular photochemistry and structural changes associated with water deficits in leaves. In this research, we used optical, crossed polarization filters to measure the VIS/NIR light reflected from the leaf interior, R, as well as the leaf transmittance, T, for 78 corn (Zea mays) and soybean (Glycine max) leaves having relative water contents (RWC) between 0.60 and 0.98. Our results show that as RWC decreases, R increases while T decreases. Our results tie R and T changes in the VIS/NIR to leaf physiological changes - linking the light scattered out of the drying leaf interior to its relative water content and to changes in leaf cellular structure and pigments. Our results suggest remotely sensing the physiological water status of a single leaf - and perhaps of a plant canopy - might be possible in the future.

  7. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  8. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. Estimating temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm, and a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions, and these rainfall estimates all simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow superior to that simulated by a traditional calibration approach is a
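
    A minimal sketch of the dimension-reduction step using PyWavelets: a synthetic rainfall series is decomposed with a multilevel DWT and represented by its low-order approximation coefficients alone. The wavelet (db4) and decomposition level are assumptions for illustration, not the study's calibrated choices.

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(6)
      # Synthetic hourly rainfall: mostly dry with a few storm bursts.
      rain = np.zeros(256)
      rain[40:48] = rng.gamma(2.0, 3.0, 8)
      rain[130:140] = rng.gamma(2.0, 5.0, 10)

      # Multilevel DWT; keep only the level-3 approximation coefficients.
      coeffs = pywt.wavedec(rain, "db4", level=3)
      kept = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
      approx = pywt.waverec(kept, "db4")

      print(f"series length {rain.size} -> {coeffs[0].size} coefficients")
      print(f"mass preserved: {approx.sum() / rain.sum():.2%}")

    Estimating a few dozen approximation coefficients instead of 256 hourly values is what makes simultaneous rainfall-plus-parameter inference tractable.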

  9. Estimated Frequency Domain Model Uncertainties used in Robust Controller Design

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob

    1994-01-01

    This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...

  10. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  11. Estimation of the Generalized Linear Model and an Application

    Directory of Open Access Journals (Sweden)

    Malika CHIKHI

    2012-06-01

    Full Text Available This article presents the generalized linear model, which encompasses modeling techniques such as linear regression, logistic regression, log-linear regression and Poisson regression. We begin by presenting the exponential family of distributions and then estimate the model parameters by the method of maximum likelihood. We then test the model coefficients to assess their significance and their confidence intervals, using the Wald test, which bears on the significance of the true parameter value based on the sample estimate.
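
    A minimal sketch of the workflow described above, using statsmodels: a Poisson GLM with a log link is fitted by maximum likelihood on synthetic data, and the Wald-type standard errors and confidence intervals are read off the fitted results.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 500
      x = rng.normal(size=n)
      mu = np.exp(0.5 + 0.8 * x)           # log link, true beta = (0.5, 0.8)
      y = rng.poisson(mu)

      X = sm.add_constant(x)
      res = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # maximum likelihood
      print(res.params)        # coefficient estimates
      print(res.bse)           # standard errors (basis of the Wald test)
      print(res.conf_int())    # 95% confidence intervals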

  12. Estimating a marriage matching model with spillover effects.

    Science.gov (United States)

    Choo, Eugene; Siow, Aloysius

    2006-08-01

    We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.

  13. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the sites of various construction activities. The amount of this debris is usually so large that it must be estimated as accurately as possible for effective waste management and control in urban areas. In this paper, an estimation method using a statistical model is proposed. The estimation process is composed of five steps, including estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. The method is applied to the city of Seoul as an actual case: the estimated amount of C&D debris in Seoul in 2021 is approximately 24 million tons, of which 98% is generated by demolition, with concrete and brick the main components of the debris.
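
    The bookkeeping behind such an estimate reduces to intensity units times floor areas. A minimal worked example follows; all numbers are invented for illustration and are not the Seoul figures.

      # Debris = floor area of activity * debris intensity per unit area,
      # summed over construction and demolition. All numbers are invented.
      intensity = {            # tonnes of debris per m2 of floor area
          "construction": 0.05,
          "demolition":   1.0,
      }
      floor_area = {           # m2 of floor area in the target year
          "construction": 2_000_000,
          "demolition":     900_000,
      }

      total = sum(intensity[k] * floor_area[k] for k in intensity)
      share_demolition = intensity["demolition"] * floor_area["demolition"] / total
      print(f"total C&D debris: {total/1e6:.2f} Mt "
            f"({share_demolition:.0%} from demolition)")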

  14. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Together these ideas produce a very efficient method that has the right theoretical property concerning robustness, namely Bounded Relative Error. Some examples illustrate the results.

  15. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. In particular, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions The proposed incremental estimation method is able to tackle the issue of the lack of complete parameter identifiability and to significantly reduce the computational effort of estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.

  16. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions are used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the method of matching moments, and in the case of the gamma-Poisson and beta-binomial models also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function, using the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system; we consider a binomial failure rate (BFR) model, as an application of marked point processes, for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.

  17. Report on estimated nuclear energy related cost for fiscal 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The report first describes major actions planned to be taken in Japan in fiscal 1991 in the field of nuclear energy utilization. Major activities to be made for comprehensive strengthening of safety assurance measures are described, focusing on improvement of nuclear energy related safety regulations, promotion of research for safety assurance, improvement and strengthening of disaster prevention measures, environmental radioactivity surveys, control of exposure of workers engaged in radioactivity related jobs, etc. The report then describes actions required for the establishment of a nuclear fuel cycle, focusing on the procurement of uranium resources, establishment of a uranium enrichment process, reprocessing of spent fuel, application of recovered uranium, etc. Other activities are required for the development of new type reactors, effective application of plutonium, development of basic techniques, international contributions, cooperation with the public. Then, the report summarizes estimated costs required for the activities to be performed by the Japan Atomic Energy Research Institute, Power Reactor and Nuclear Fuel Development Corporation, National Institute of Radiological Sciences, Institute of Physical and Chemical Research. (N.K.)

  18. [Application of DNDC model in estimating cropland nitrate leaching].

    Science.gov (United States)

    Li, Hu; Wang, Li-Gang; Qiu, Jian-Jun

    2009-07-01

    The leaching of soil water and nitrate from a winter wheat field under a typical planting system in Jinan City, Shandong Province, was measured with a lysimeter during the whole growth season in 2008, and the feasibility of applying the DNDC model to estimate this leaching was tested with the obtained data. On the whole, the DNDC model simulated soil water movement in the crop field well, with acceptable accuracy. However, there was definite deviation in the simulation of nitrate leaching: the simulated value (18.35 kg N x hm(-2)) was 3.46 kg N x hm(-2) higher than the observed value (14.89 kg N x hm(-2)), a relative error of about 20%, which suggested that some related parameters require further modification. A sensitivity test of the DNDC model showed that cropland nitrate leaching is easily affected by irrigation and fertilization. The model proved to have definite applicability in the study area.

  19. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplet sizes is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using the ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated diameters varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the greatest radial distance from the emitter. The model's performance was classified as excellent for simulating microsprinkler drop distribution.

  20. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from origin to destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
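
    The link-dividing algorithm itself is not specified in the abstract; the following is a minimal sketch of the underlying idea of a connected-vehicle travel time estimator, aggregating per-link traversal reports into a dynamic route estimate. Link names and times are invented.

      from collections import defaultdict
      from statistics import mean

      # Each connected vehicle reports (link_id, traversal_time_s).
      reports = [
          ("L1", 42.0), ("L1", 47.5), ("L1", 39.8),
          ("L2", 81.0), ("L2", 95.2),
      ]

      by_link = defaultdict(list)
      for link, t in reports:
          by_link[link].append(t)

      # Dynamic estimate: mean traversal time per link from recent reports.
      estimates = {link: mean(ts) for link, ts in by_link.items()}
      route = ["L1", "L2"]
      print("route travel time estimate:",
            sum(estimates[l] for l in route), "s")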

  1. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints, using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination, and the time-dependent accumulated damage is assumed linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of the Weibull...
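
    A minimal sketch of the final estimation step under stated assumptions: simulated solder-joint fatigue lives are fitted with a two-parameter Weibull distribution by maximum likelihood using scipy; the shape and scale values are illustrative, not the paper's results.

      import numpy as np
      from scipy.stats import weibull_min

      # Simulated solder-joint fatigue lives (illustrative shape/scale).
      lives = weibull_min.rvs(c=2.5, scale=1e4, size=200, random_state=8)

      # Maximum-likelihood fit; location fixed at zero (2-parameter Weibull).
      shape, loc, scale = weibull_min.fit(lives, floc=0)
      print(f"shape {shape:.2f}, characteristic life {scale:.0f} cycles")
      print(f"B10 life: {weibull_min.ppf(0.10, shape, scale=scale):.0f} cycles")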

  2. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrate the advantages of the cokriging model

  3. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model is expressed in terms of odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation sites. Parameter estimation is needed to infer population values from a sample. The purpose of this research is parameter estimation of the GWOLR model using the R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in Semarang City as observation units. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  4. Comparison of Estimation Procedures for Multilevel AR(1 Models

    Directory of Open Access Journals (Sweden)

    Tanja eKrone

    2016-04-01

    Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible: the Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
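
    Not the paper's MLE or Bayesian machinery, but a minimal simulation in the spirit of its design: individual AR(1) series are generated with autocorrelations drawn around a population mean, each series gets a per-individual lag-1 least-squares estimate, and the estimates are pooled. The well-known small-sample downward bias of the lag-1 estimator is visible for short series.

      import numpy as np

      rng = np.random.default_rng(9)
      n_ind, n_time = 25, 25
      mu_phi, sd_phi = 0.3, 0.25   # population mean and SD of autocorrelation

      est = []
      for _ in range(n_ind):
          # Draw an individual autocorrelation, clipped to stationarity.
          phi = np.clip(rng.normal(mu_phi, sd_phi), -0.95, 0.95)
          y = np.zeros(n_time)
          for t in range(1, n_time):
              y[t] = phi * y[t - 1] + rng.normal()
          # Per-individual ("fixed") lag-1 least-squares estimate.
          est.append(np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2))

      print(f"pooled mean of individual estimates: {np.mean(est):.3f} "
            f"(population mean {mu_phi})")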

  5. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    The objective is to minimize the processing time and computer memory required ... the time to acquire extra GPR or seismic data for large sites and picking the first arrival times to provide the needed datasets for the joint inversion are also ... The data utilized for the regression modelling was acquired from ground...

  6. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    ... of the processing time and memory space required to carry out the inversion with the SCLS algorithm ... consumption of time and memory space for the iterative computations to converge at minimum data ... colour scale and blanking as the observed true resistivity models, for visual assessment. The accuracy ...

  7. Genetic Prediction Models and Heritability Estimates for Functional ...

    African Journals Online (AJOL)

    This paper discusses these methodologies and their advantages and disadvantages. Heritability estimates obtained from these models are also reviewed. Linear methodologies can model binary and actual longevity, while RR and TM methodologies model binary survival. PH procedures model the hazard function of a cow ...

  8. Estimating small area health-related characteristics of populations: a methodological review

    Directory of Open Access Journals (Sweden)

    Azizur Rahman

    2017-05-01

    Full Text Available Estimation of health-related characteristics at a fine local geographic level is vital for effective health promotion programmes, provision of better health services, and population-specific health planning and management. The lack of micro-datasets of individual attributes readily available at the small-area level negatively impacts the ability of local and national agencies to manage serious health issues and related risks in the community. A solution to this challenge would be a method that simulates reliable small-area statistics. This paper provides a significant appraisal of the methodologies for estimating health-related characteristics of populations over geographically limited areas. Findings reveal that a range of methodologies are in use, which can be classified into three distinct sets of approaches: (i) indirect standardisation and individual-level modelling; (ii) multilevel statistical modelling; and (iii) microsimulation modelling. Although each approach has its own strengths and weaknesses, it appears that microsimulation-based spatial models have significant robustness over the other methods and also represent a more precise means of estimating health-related population characteristics over small areas.

  9. Model estimation of energy flow in Oregon coastal seabird populations

    Science.gov (United States)

    Wiens, J.A.; Scott, J.M.

    1976-01-01

    A computer simulation model was used to explore the patterns and magnitudes of population density changes and population energy demands in Oregon populations of Sooty Shearwaters, Leach's Storm-Petrels, Brandt's Cormorants, and Common Murres. The species differ in seasonal distribution and abundance, with shearwaters attaining high densities during their migratory movements through Oregon waters, and murres exhibiting the greatest seasonal stability in population numbers. On a unit area basis, annual energy flow is greatest through murre and cormorant populations. However, because shearwaters occupy a larger area during their transit, they dominate the total energy flow through the four-species seabird "community." Consumption of various prey types is estimated by coupling model output of energy demands with information on dietary habits. This analysis suggests that murres annually consume nearly twice as many herring as any other prey and consume approximately equal quantities of anchovy, smelt, cod, and rockfish. Cormorants consume a relatively small quantity of bottom-dwelling fish, while storm-petrels take roughly equal quantities of euphausiids and hydrozoans. Anchovies account for 43% of the 62,506 metric tons of prey the four species are estimated to consume annually; 86% of this anchovy consumption is by shearwaters. The consumption of pelagic fishes by these four populations within the neritic zone may represent as much as 22% of the annual production of these fish.

  10. IMPROVEMENT OF MATHEMATICAL MODELS FOR ESTIMATION OF TRAIN DYNAMICS

    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak

    2017-12-01

    Full Text Available Purpose. Using scientific publications, the paper analyzes the mathematical models developed in Ukraine, the CIS countries and abroad for theoretical studies of train dynamics, and also shows the urgency of their further improvement. Methodology. The information base of the research was official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: (1) the design of new types of draft gears and air distributors; (2) the development of methods for controlling the movement of conventional and connected trains; (3) the creation of appropriate process flow diagrams; (4) the development of energy-saving methods of train driving; (5) the revision of the Construction Codes and Regulations (SNiP II-39.76); (6) the selection of the parameters of the autonomous automatic control system, created in DNURT, for an auxiliary locomotive that is part of a connected train; (7) the creation of computer simulators for the training of locomotive drivers; and (8) the assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to the estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors present the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, the CIS countries and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  11. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and to formulate the semiparametric estimating equations explicitly. We further show that the explicit estimators have the usual root-n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  12. Model for estimating of population abundance using line transect sampling

    Science.gov (United States)

    Abdulraqeb Abdullah Saeed, Gamil; Muhammad, Noryanti; Zun Liang, Chuan; Yusoff, Wan Nur Syahidah Wan; Zuki Salleh, Mohd

    2017-09-01

    Today, many studies use nonparametric methods for estimating object abundance, while for their simplicity parametric methods are widely used by biometricians. This paper presents a proposed model for estimating population abundance using the line transect technique. The proposed model is appealing because it is strictly monotonically decreasing with perpendicular distance and satisfies the shoulder condition. The statistical properties and inference of the proposed model are discussed. Theoretically, the proposed detection function satisfies the line transect assumptions, which leads us to study the performance of this model. We use this model as a reference for future research on density estimation. In this paper we also study the assumptions on the detection function and introduce the corresponding model in order to apply the simulation in future work.
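
    The authors' detection function is not fully specified in the abstract; as a point of reference, a minimal sketch of classical line-transect estimation with an assumed half-normal detection function: the scale is estimated by maximum likelihood and converted into an effective strip half-width and a density.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Perpendicular detection distances (m); synthetic, half-normal truth.
      rng = np.random.default_rng(10)
      x = np.abs(rng.normal(0, 20.0, 80))

      # Half-normal detection g(x) = exp(-x^2 / (2 s^2)); MLE for s.
      nll = lambda s: -np.sum(-x**2 / (2*s**2) - np.log(s) - 0.5*np.log(np.pi/2))
      s_hat = minimize_scalar(nll, bounds=(1.0, 100.0), method="bounded").x

      mu = s_hat * np.sqrt(np.pi / 2)      # effective strip half-width
      L = 5000.0                           # transect length (m)
      density = x.size / (2 * L * mu)      # objects per m^2
      print(f"sigma {s_hat:.1f} m, ESW {mu:.1f} m, density {density*1e6:.1f} per km^2")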

  13. Battery Calendar Life Estimator Manual Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  14. How to use COSMIC Functional Size in Effort Estimation Models?

    OpenAIRE

    Gencel, Cigdem

    2008-01-01

    Although Functional Size Measurement (FSM) methods have become widely used by software organizations, functional size based effort estimation still needs further investigation. Most studies on effort estimation consider the total functional size of the software as the primary input to estimation models and mostly focus on identifying the project parameters that might have a significant effect on the size-effort relationship. This study brings suggestions on how to use COSMIC ...

  15. Ambulatory estimation of relative foot positions by fusing ultrasound and inertial sensor data

    NARCIS (Netherlands)

    Weenk, D.; Roetenberg, D.; van Beijnum, Bernhard J.F.; Hermens, Hermanus J.; Veltink, Petrus H.

    2015-01-01

    Relative foot position estimation is important for rehabilitation, sports training and functional diagnostics. In this paper an extended Kalman filter fusing ultrasound range estimates and inertial sensor data is described. With this filter, several gait parameters can be estimated in an ambulatory setting. Step

  16. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error.

    Science.gov (United States)

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J

    2017-11-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric: they are consistent without imposing a covariate or error distribution, and they are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.

  17. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... frequency. We suggest two approaches for the estimation of structural parameters. The first is a simple regression-based procedure for estimation of the reduced-form parameters of the model, combined with a minimum-distance method for identifying the structural parameters. The second approach uses...... martingale estimating functions to estimate the structural parameters directly through a non-linear optimization scheme. We illustrate both approaches by estimating the stochastic AK model with mean-reverting spot interest rates. We also provide Monte Carlo evidence on the small sample behavior...

  18. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
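
    As a hedged sketch of the generic sandwich construction the abstract builds on (not the authors' CDM-specific derivation), given per-observation score vectors and the observed information matrix at the maximum likelihood estimate, the sandwich covariance is H^-1 B H^-1 with B the empirical cross-product of scores:

      import numpy as np

      def sandwich_covariance(scores, observed_info):
          """Sandwich-type covariance estimate.

          scores: (n, p) array of per-observation score vectors at the MLE.
          observed_info: (p, p) observed information matrix (negative Hessian
          of the log-likelihood at the MLE).
          """
          H_inv = np.linalg.inv(observed_info)
          B = scores.T @ scores  # empirical cross-product information
          return H_inv @ B @ H_inv

      # The inverse observed information alone would be H_inv; the sandwich
      # form stays valid under model misspecification, which matches the
      # abstract's simulation findings.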

  19. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates the parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM gives a better fit to the data, more reasonable parameter estimates, and smaller prediction error than TSM.
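
    A minimal sketch of direct nonlinear estimation in the spirit of Marquardt's method, using SciPy's Levenberg-Marquardt solver on a hypothetical bivariate aging model y = a*exp(-b*t)*exp(-c/T); the paper's actual rubber-aging model and data are not reproduced here:

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)

      # Hypothetical rubber-aging data: performance y at times t and temperatures T.
      t = np.tile(np.linspace(1, 100, 20), 3)
      T = np.repeat([293.0, 323.0, 353.0], 20)
      y = 0.9 * np.exp(-0.01 * t) * np.exp(-50.0 / T) + rng.normal(0, 0.005, t.size)

      def residuals(p):
          a, b, c = p
          return a * np.exp(-b * t) * np.exp(-c / T) - y

      # method="lm" is SciPy's Levenberg-Marquardt implementation; it estimates
      # all three parameters jointly instead of splitting the model in two steps.
      fit = least_squares(residuals, x0=[1.0, 0.05, 10.0], method="lm")
      print(fit.x)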

  20. Consistent estimation of linear panel data models with measurement error

    NARCIS (Netherlands)

    Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas

    2017-01-01

    Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the

  1. The Development of an Empirical Model for Estimation of the ...

    African Journals Online (AJOL)

    Nassiri P

    rate, daily water consumption, smoking habits, drugs that interfere with the thermoregulatory processes, and exposure to other harmful agents. Conclusions: Eventually, based on the criteria, a model for estimation of the workers' sensitivity to heat stress was presented for the first time, by which the sensitivity is estimated in ...

  2. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  3. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...

  4. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...

  5. Estimation for the Multiple Factor Model When Data Are Missing.

    Science.gov (United States)

    Finkbeiner, Carl

    1979-01-01

    A maximum likelihood method of estimating the parameters of the multiple factor model when data are missing from the sample is presented. A Monte Carlo study compares the method with five heuristic methods of dealing with the problem. The present method shows some advantage in accuracy of estimation. (Author/CTM)

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  8. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical

  9. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    A Monte Carlo study of the small-sample properties of five estimators of a linear model with autocorrelated error terms is discussed. The independent variable was specified as standard normal data. The estimates of the slope coefficient β obtained with Ordinary Least Squares (OLS) increased with increased ...

  10. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  11. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    May 9, 2015 ... cattle in South Africa. Linear models, random regression (RR) models, threshold models (TMs) and ... Heritability for longevity has been estimated with TMs in Canadian Holsteins (Boettcher et al., 1999), Spanish ... simulation to incorporate the tri-gamma function (γ) as used by Sasaki et al. (2012) and ...

  12. On mixture model complexity estimation for music recommender systems

    NARCIS (Netherlands)

    Balkema, W.; van der Heijden, Ferdinand; Meijerink, B.

    2006-01-01

    Content-based music navigation systems are in need of robust music similarity measures. Current similarity measures model each song with the same model parameters. We propose methods to efficiently estimate the required number of model parameters of each individual song. First results of a study on

  13. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  14. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty although the exact width...
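
    A minimal sketch of the generic GLUE recipe (Monte Carlo parameter sampling, an informal likelihood, a behavioral threshold, percentile uncertainty bounds), assuming a toy linear-reservoir stand-in for the urban drainage model and synthetic data; the radar-specific input uncertainty of the paper is not modeled here:

      import numpy as np

      rng = np.random.default_rng(2)

      def run_model(params, forcing):
          # Placeholder for the urban drainage model: a toy linear reservoir.
          k = params[0]
          storage, out = 0.0, np.zeros_like(forcing)
          for i, p in enumerate(forcing):
              storage += p
              out[i] = storage / k
              storage -= out[i]
          return out

      forcing = rng.gamma(2.0, 1.0, size=100)                 # stand-in rainfall series
      obs = run_model([5.0], forcing) + rng.normal(0, 0.05, 100)

      # GLUE: sample parameters, score each run with an informal likelihood
      # (Nash-Sutcliffe here), keep "behavioral" sets above a threshold, and
      # take percentile bounds of the behavioral simulations as the
      # uncertainty band (unweighted percentiles for brevity).
      samples = rng.uniform(1.0, 20.0, size=(2000, 1))
      sims = np.array([run_model(p, forcing) for p in samples])
      nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
      behavioral = sims[nse > 0.5]
      lower, upper = np.percentile(behavioral, [5, 95], axis=0)
      print(len(behavioral), lower[:3], upper[:3])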

  15. An improved model for estimating pesticide emissions for agricultural LCA

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky

    2011-01-01

    Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for assessment of agricultural...... products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms such as insects or weed plants. PestLCI was developed to estimate the fractions....... To overcome these limitations, a reworked and updated version of PestLCI is presented here. The new model includes 16 European climate types and 6 mean European soil characteristic profiles covering all dominant European soil types to widen the geographical scope and to allow contemporary (varying site...

  16. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  17. Carbon footprint estimator, phase II : volume I - GASCAP model.

    Science.gov (United States)

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...

  18. Multilevel models improve precision and speed of IC50 estimates.

    Science.gov (United States)

    Vis, Daniel J; Bombardelli, Lorenzo; Lightfoot, Howard; Iorio, Francesco; Garnett, Mathew J; Wessels, Lodewyk Fa

    2016-05-01

    Experimental variation in dose-response data of drugs tested on cell lines results in inaccuracies in the estimate of a key drug sensitivity characteristic: the IC50. We aim to improve the precision of the half-limiting dose (IC50) estimates by simultaneously employing all dose-responses across all cell lines and drugs, rather than using a single drug-cell line response. We propose a multilevel mixed effects model that takes advantage of all available dose-response data. The new estimates are highly concordant with the currently used Bayesian model when the data are well behaved. Otherwise, the multilevel model is clearly superior. The multilevel model yields a significant reduction of extreme IC50 estimates and an increase in precision, and it runs orders of magnitude faster.

  19. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all....... For a comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose, it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  20. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal...... extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference ratio (SIR) levels, and an experiment...... on a trumpet signal shows the applicability to real signals....

  1. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...

  2. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    Science.gov (United States)

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but the estimates were more unstable, and the models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  3. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  4. The efficiency of OLS estimator in the linear-regression model with ...

    African Journals Online (AJOL)

    Bounds for the efficiency of ordinary least squares estimator relative to generalized least squares estimator in the linear regression model with first-order spatial error process are given. SINET: Ethiopian Journal of Science Vol. 24, No. 1 (June 2001), pp. 17-33. Key words/phrases: Efficiency, generalized least squares, ...

  5. Estimating relative risk of a log-transformed exposure measured in pools

    Science.gov (United States)

    Mitchell, Emily M.; Plowden, Torie C.; Schisterman, Enrique F.

    2016-01-01

    Pooling biospecimens prior to performing laboratory assays is a useful tool to reduce costs, achieve minimum volume requirements, and mitigate assay measurement error. When estimating the risk of a continuous, pooled exposure on a binary outcome, specialized statistical techniques are required. Current methods include a regression calibration approach, where the expectation of the individual-level exposure is calculated by adjusting the observed pooled measurement with additional covariate data. While this method employs a linear regression calibration model, we propose an alternative model that can accommodate log-linear relationships between the exposure and predictive covariates. The proposed model permits direct estimation of the relative risk associated with a log-transformation of an exposure measured in pools. PMID:27530506

  6. Estimation of the Malthusian parameter in a stochastic epidemic model using martingale methods.

    Science.gov (United States)

    Lindenstrand, David; Svensson, Åke

    2013-12-01

    Data on the number of infected individuals gathered from a large epidemic outbreak can be used to estimate parameters related to the strength and speed of the spread. The Malthusian parameter, which determines the initial growth rate of the epidemic, is often of crucial interest. Using a simple SEIR epidemic model with a known generation time distribution, we define and analyze an estimate based on martingale methods. We derive asymptotic properties of the estimate and compare them to the results from simulations of the epidemic. The estimate uses all the information contained in the epidemic curve, in contrast to estimates which only use data from the start of the outbreak.
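
    The martingale estimator itself is model-specific, but as a hedged baseline the Malthusian parameter r can be read off the early, approximately exponential phase of an epidemic curve, I(t) = I0 * exp(r * t), by a log-linear fit; the data below are simulated:

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical daily counts of infected during the early outbreak phase,
      # where growth is approximately exponential.
      t = np.arange(30)
      true_r = 0.15
      counts = rng.poisson(5.0 * np.exp(true_r * t))

      # Log-linear least squares on the strictly positive counts gives a
      # simple estimate of the Malthusian parameter r (the initial growth rate).
      mask = counts > 0
      slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
      print("estimated r:", slope, "doubling time:", np.log(2) / slope)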

  7. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, option values are evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  8. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
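
    For intuition, the simplest member of the difference-based family is the first-order (Rice-type) variance estimator, which uses lag-1 differences of responses ordered by the covariate and never fits the nonparametric component; this is a baseline sketch, not the optimal-sequence estimator proposed in the paper:

      import numpy as np

      rng = np.random.default_rng(4)

      # y_i = m(x_i) + eps_i with smooth m; differencing removes m approximately.
      n = 500
      x = np.sort(rng.uniform(0, 1, n))
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

      # Rice estimator: E[(y_{i+1} - y_i)^2] is close to 2 sigma^2 when m is
      # smooth and the design points are dense.
      sigma2_hat = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
      print(sigma2_hat)  # close to 0.09 = 0.3^2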

  9. Estimating radiation-induced cancer risk using MVK two-stage model for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, M.; Kusama, T.; Aoki, Y.

    1993-01-01

    Based on the carcinogenesis model proposed by Moolgavkar et al., time-dependent relative risk models were derived for projecting the time variation in excess relative risk. If each process is assumed to follow a time-independent linear dose-response relationship, the time variation in excess relative risk is governed by the parameter related to the promotion process. A risk model based on carcinogenesis theory would play a marked role in estimating radiation-induced cancer risk when constructing a projection model or a transfer model.

  10. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  11. Statistical models for the estimation of the origin-destination matrix from traffic counts

    Directory of Open Access Journals (Sweden)

    Anselmo Ramalho Pitombeira Neto

    2017-12-01

    Full Text Available In transportation planning, one of the first steps is to estimate the travel demand. The final product of the estimation process is an origin-destination (OD) matrix, whose entries correspond to the number of trips between pairs of origin-destination zones in a study region. In this paper, we review the main statistical models proposed in the literature for the estimation of the OD matrix based on traffic counts. Unlike reconstruction models, statistical models do not aim at estimating the exact OD matrix corresponding to observed traffic volumes; rather, they aim at estimating the parameters of a statistical model of the population of OD matrices. Initially we define the estimation problem, emphasizing its underspecified nature, which has led to the development of several models based on different approaches. We describe static models whose parameters are estimated by means of maximum likelihood, the method of moments, and Bayesian inference. We also describe some recent dynamic models. Following that, we discuss research questions related to the underspecification problem, model assumptions and the estimation of the route choice matrix, and indicate promising research directions.
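
    A toy instance of the underspecification described above: link counts c relate to OD flows x through an assignment matrix A (c = Ax) with fewer counted links than OD pairs, so one simple estimator augments the count equations with a prior (seed) matrix and solves a nonnegative least-squares problem. All numbers below are made up:

      import numpy as np
      from scipy.optimize import nnls

      # Toy network: 4 OD pairs, 3 counted links. A[l, k] = fraction of OD
      # pair k's trips that use link l (the route choice/assignment matrix).
      A = np.array([[1.0, 0.0, 0.5, 0.0],
                    [0.0, 1.0, 0.5, 0.5],
                    [1.0, 1.0, 0.0, 0.5]])
      counts = np.array([120.0, 180.0, 210.0])     # observed link volumes
      prior = np.array([80.0, 90.0, 70.0, 100.0])  # seed OD matrix (e.g. old survey)

      # Underspecified (3 equations, 4 unknowns): augment with the prior so the
      # solution stays close to the seed matrix while reproducing the counts.
      lam = 0.1  # weight on the prior term
      A_aug = np.vstack([A, lam * np.eye(4)])
      b_aug = np.concatenate([counts, lam * prior])
      x_hat, _ = nnls(A_aug, b_aug)
      print(x_hat, A @ x_hat)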

  12. Relative abundance estimations of chengal tree in a tropical rainforest by using modified Canopy Fractional Cover (mCFC)

    International Nuclear Information System (INIS)

    Hassan, N

    2014-01-01

    Tree species composition estimates are important for sustainable forest management. This study estimated the relative abundance of a useful timber tree species (chengal) using Hyperion EO-1 satellite data. For the estimation, a modified Canopy Fractional Cover (mCFC) was developed from Canopy Fractional Cover (CFC). mCFC was more sensitive for estimating the relative abundance of chengal trees than Mixture Tuned Matched Filtering (MTMF), whereas MTMF was more sensitive for estimating the relative abundance of undisturbed forest. Accuracy results suggest that the mCFC model explains the relative abundance of chengal trees better than MTMF. Therefore, it can be concluded that relative abundance of tree species extracted from Hyperion EO-1 satellite data using modified Canopy Fractional Cover is an unobtrusive approach for identifying tree species composition.

  13. Relative abundance estimations of Chengal trees in a tropical rainforest by using modified canopy fractional cover (mCFC)

    International Nuclear Information System (INIS)

    Hassan, N

    2014-01-01

    Tree species composition estimates are important for sustainable forest management. This study estimates the relative abundance of a useful timber tree species (chengal) using Hyperion EO-1 satellite data. For the estimation, a modified Canopy Fractional Cover (mCFC) was developed from Canopy Fractional Cover (CFC). mCFC was more sensitive for estimating the relative abundance of chengal trees than Mixture Tuned Matched Filtering (MTMF), whereas MTMF was more sensitive for estimating the relative abundance of undisturbed forest. Accuracy results suggest that the mCFC model explains the relative abundance of chengal trees better than MTMF. Therefore, it can be concluded that relative abundance of tree species extracted from Hyperion EO-1 satellite data using modified Canopy Fractional Cover is an unobtrusive approach for identifying tree species composition.

  14. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

    Full Text Available This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual-specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically lagged dependent variables. To address the problem of endogeneity in these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations, which show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  15. Models for estimating macronutrients in Mimosa scabrella Bentham

    Directory of Open Access Journals (Sweden)

    Saulo Jorge Téo

    2010-09-01

    Full Text Available The aim of this work was to adjust and test different statistical models for estimating macronutrient content in the above-ground biomass of bracatinga (Mimosa scabrella Bentham). The data were collected from 25 bracatinga trees, all native to the north of the metropolitan region of Curitiba, Paraná state, Brazil. To determine the biomass and macronutrient content, the trees were separated into the compartments leaves, branches < 4 cm, branches ≥ 4 cm, wood and stem bark. Different statistical models were adjusted to estimate N, P, K, Ca and Mg contents in the tree compartments, using dendrometric variables as the model independent variables. Based on the results, the equations developed for estimating macronutrient contents were, in general, satisfactory. The most accurate estimates were obtained for the stem biomass compartments and the sum of the biomass compartments. In some cases, the equations had a better performance when crown and stem dimensions, age and dominant height were included as independent variables.

  16. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....

  17. Improving the realism of hydrologic model through multivariate parameter estimation

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skills of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with an aim to improve the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed by in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic models and their applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10

  18. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. Literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to the unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
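
    A hedged sketch of the same idea with off-the-shelf tooling: scikit-learn's BayesianGaussianMixture with a Dirichlet-process (stick-breaking) prior, capped at six components as in the study, fitted to hypothetical log travel times. Note that scikit-learn uses variational inference rather than the MCMC sampling used in the paper:

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(5)

      # Hypothetical 5-minute corridor travel times (minutes): free-flow and
      # congested regimes mixed together; model them on the log scale.
      tt = np.concatenate([rng.lognormal(np.log(8), 0.05, 600),
                           rng.lognormal(np.log(14), 0.15, 400)])
      X = np.log(tt).reshape(-1, 1)

      # Dirichlet-process prior via stick-breaking: the model effectively
      # switches off unneeded components, so the number of "states" is
      # inferred rather than fixed in advance (capped at 6 as in the study).
      dpmm = BayesianGaussianMixture(
          n_components=6,
          weight_concentration_prior_type="dirichlet_process",
          max_iter=500,
          random_state=0,
      ).fit(X)
      print(np.round(dpmm.weights_, 3))  # weights near 0 mark unused components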

  19. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
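
    As a hedged sketch of the "full statistical likelihood" formulation, the log-likelihood of simulation errors under an AR(1) model e_t = phi * e_{t-1} + eta_t, eta_t ~ N(0, sigma^2), can be written down directly; setting phi = 0 recovers the simple independent-error model:

      import numpy as np

      def ar1_log_likelihood(errors, phi, sigma):
          """Log-likelihood of simulation errors e_t = phi * e_{t-1} + eta_t,
          eta_t ~ N(0, sigma^2), using the stationary distribution for e_1.
          Requires |phi| < 1; phi = 0 gives the independent-error likelihood."""
          e = np.asarray(errors, dtype=float)
          var_stat = sigma**2 / (1.0 - phi**2)  # stationary variance of e_1
          ll = -0.5 * (np.log(2 * np.pi * var_stat) + e[0] ** 2 / var_stat)
          innov = e[1:] - phi * e[:-1]
          ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2))
          return ll

      # In an MCMC sampler this would be evaluated with errors = observed
      # streamflow minus simulated streamflow, jointly over hydrological and
      # statistical parameters (phi, sigma).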

  20. Kernel PLS Estimation of Single-trial Event-related Potentials

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  1. Estimated daily salt intake in relation to blood pressure and blood lipids: the role of obesity.

    Science.gov (United States)

    Thuesen, Betina H; Toft, Ulla; Buhelt, Lone P; Linneberg, Allan; Friedrich, Nele; Nauck, Matthias; Wallaschofski, Henri; Jørgensen, Torben

    2015-12-01

    Excessive salt intake causes increased blood pressure, which is considered the leading risk factor for premature death. One major challenge when evaluating associations between daily salt intake and markers of non-communicable diseases is that a high daily salt intake correlates with obesity, which is also a well described risk factor for poor cardiometabolic outcome. The aim of this study was to evaluate the relationship of estimated daily salt intake with blood pressure and blood lipids and to investigate the effect of taking different measures of obesity into account. We included 3294 men and women aged 18-69 years from a general population based study in Copenhagen, Denmark. Estimated 24-hour sodium excretion was calculated from measurements of creatinine and sodium concentration in spot urine in combination with information on sex, age, height and weight. The relations of estimated 24-hour sodium excretion with blood pressure and blood lipids were evaluated by linear regression models. The daily mean estimated intake of salt was 10.80 g among men and 7.52 g among women. Daily salt intake was significantly associated with blood pressure (β-estimates 1.18 mm Hg/g salt (systolic) and 0.74 mm Hg/g salt (diastolic), p < 0.001). Associations with blood lipids were highly affected by adjustment for obesity. Associations of estimated daily salt intake with blood pressure and blood lipids were highly affected by adjustment for obesity. © The European Society of Cardiology 2014.

  2. Surface-source modeling and estimation using biomagnetic measurements.

    Science.gov (United States)

    Yetik, Imam Samil; Nehorai, Arye; Muravchik, Carlos H; Haueisen, Jens; Eiselt, Michael

    2006-10-01

    We propose a number of electric source models that are spatially distributed on an unknown surface for biomagnetism. These can be useful to model, e.g., patches of electrical activity on the cortex. We use a realistic head (or another organ) model and discuss the special case of a spherical head model with radial sensors resulting in more efficient computations of the estimates for magnetoencephalography. We derive forward solutions, maximum likelihood (ML) estimates, and Cramér-Rao bound (CRB) expressions for the unknown source parameters. A model selection method is applied to decide on the most appropriate model. We also present numerical examples to compare the performances and computational costs of the different models and illustrate when it is possible to distinguish between surface and focal sources or line sources. Finally, we apply our methods to real biomagnetic data of phantom human torso and demonstrate the applicability of them.

  3. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
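
    A compact sketch of marginal maximum likelihood for the Rasch model (deliberately simpler than the generalized partial credit model the paper implements): the latent ability is integrated out with Gauss-Hermite quadrature under a standard-normal assumption, and the item difficulties are found by direct optimization. Data and all settings below are hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)

      # Hypothetical 0/1 responses: 300 persons, 5 items, Rasch model
      # P(X_ij = 1 | theta_i) = 1 / (1 + exp(-(theta_i - b_j))).
      n_persons, n_items = 300, 5
      true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
      theta = rng.normal(size=(n_persons, 1))
      X = (rng.random((n_persons, n_items))
           < 1.0 / (1.0 + np.exp(-(theta - true_b)))).astype(float)

      # Gauss-Hermite nodes/weights adapted to N(0, 1): theta_k = sqrt(2)*x_k,
      # mixing weights w_k / sqrt(pi).
      nodes, weights = np.polynomial.hermite.hermgauss(21)
      qpts = np.sqrt(2.0) * nodes
      qwts = weights / np.sqrt(np.pi)

      def neg_marginal_loglik(b):
          # p[k, j]: probability of a correct response at quadrature point k.
          p = 1.0 / (1.0 + np.exp(-(qpts[:, None] - b[None, :])))
          # log P(X_i | theta_k) for every person i and node k, mixed over k.
          log_lik_ik = X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T
          return -np.sum(np.log(np.exp(log_lik_ik) @ qwts))

      b_hat = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS").x
      print(np.round(b_hat, 2))  # approaches true_b = (-1, -0.5, 0, 0.5, 1)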

  4. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation...
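
    For reference, the Gauss-Newton update underlying such schemes is generic: linearize the residual r(theta) via its Jacobian J and iterate theta <- theta - (J^T J)^{-1} J^T r. A minimal sketch on a toy curve-fitting residual, not the paper's distance-map residual:

      import numpy as np

      def gauss_newton(residual, jacobian, theta0, n_iter=20):
          """Generic Gauss-Newton: theta <- theta - (J^T J)^{-1} J^T r."""
          theta = np.asarray(theta0, dtype=float)
          for _ in range(n_iter):
              r = residual(theta)
              J = jacobian(theta)
              theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
          return theta

      # Toy example: fit y = a * exp(b * t) to noisy data.
      rng = np.random.default_rng(7)
      t = np.linspace(0, 1, 50)
      y = 2.0 * np.exp(-1.5 * t) + rng.normal(0, 0.01, t.size)

      res = lambda th: th[0] * np.exp(th[1] * t) - y
      jac = lambda th: np.column_stack([np.exp(th[1] * t),
                                        th[0] * t * np.exp(th[1] * t)])
      print(gauss_newton(res, jac, [1.0, -1.0]))  # approaches (2.0, -1.5)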

  5. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.

  6. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios, using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with the situation. All analyses were performed with SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
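
    A hedged sketch of the core computation in Python rather than SAS: a GLM with a binomial family and log link yields exponentiated coefficients that are prevalence ratios. The statsmodels >= 0.14 link spelling (links.Log) is assumed here, and the log link can fail to converge, which is the situation the COPY method addresses; the data below are simulated:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)

      # Hypothetical data: binary outcome (supports smoking ban) by smoking
      # status, with age as a continuous covariate.
      n = 2000
      smoker = rng.integers(0, 2, n)
      age = rng.uniform(18, 70, n)
      p = np.exp(-0.6 - 0.35 * smoker + 0.004 * age)  # log-linear prevalence
      y = (rng.random(n) < p).astype(float)

      X = sm.add_constant(np.column_stack([smoker, age]))
      # Binomial family with log link: exp(coef) is a prevalence ratio.
      model = sm.GLM(y, X,
                     family=sm.families.Binomial(link=sm.families.links.Log()))
      fit = model.fit()
      print(np.exp(fit.params))  # PR for smoking approaches exp(-0.35) ~ 0.70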

  7. Estimation of pure autoregressive vector models for revenue series ...

    African Journals Online (AJOL)

    This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...

  8. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  9. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  10. Estimating classification images with generalized linear and additive models.

    Science.gov (United States)

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
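
    A minimal sketch of the GLM formulation: simulate a yes/no observer whose decision variable is the inner product of an internal template with the stimulus noise, then recover the template (the classification image) as the coefficients of a Bernoulli GLM, here plain logistic regression with weak regularization; all settings are made up:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)

      # Simulated observer: on each trial the stimulus is a noise field; the
      # response is 1 when template . noise + internal noise exceeds 0.
      n_trials, n_pixels = 5000, 64
      template = np.zeros(n_pixels)
      template[24:40] = 1.0  # the "true" internal template
      noise = rng.normal(size=(n_trials, n_pixels))
      resp = (noise @ template + rng.normal(0, 2.0, n_trials) > 0).astype(int)

      # Bernoulli GLM: the fitted coefficients are the classification image,
      # estimated up to a scale factor. A large C means weak L2 regularization,
      # which also keeps the call portable across scikit-learn versions.
      glm = LogisticRegression(C=1e6, max_iter=1000).fit(noise, resp)
      image = glm.coef_.ravel()
      print(np.corrcoef(image, template)[0, 1])  # high correlation with template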

  11. Bases for the Creation of Electric Energy Price Estimate Model

    International Nuclear Information System (INIS)

    Toljan, I.; Klepo, M.

    1995-01-01

    The paper presents the basic principles for the creation and introduction of a new model for electric energy price estimation and its significant influence on the functioning of the tariff system. There is also a review of the model presently used for electric energy price estimation, which is based on objectivized values of electric energy plants and of production, transmission and distribution facilities, followed by proposed changes that would result in functional and organizational improvements within the electric energy system as the most complex subsystem of the whole power system. The model is based on a substantial and functional connection of the optimization and analysis system with economic dispatching of electric energy, including marginal cost estimation and its influence on the tariff system as the main means of improving the functioning quality of the electric energy system. (author). 10 refs., 2 figs

  12. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  13. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  14. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  15. Some statistical considerations related to the estimation of cancer risk following exposure to ionizing radiation

    International Nuclear Information System (INIS)

    Land, C.E.; Pierce, D.A.

    1983-01-01

Statistical theory and methodology provide the logical structure for scientific inference about the cancer risk associated with exposure to ionizing radiation. Although much is known about radiation carcinogenesis, the risk associated with low-level exposures is difficult to assess because it is too small to measure directly. Estimation must therefore depend upon mathematical models which relate observed risks at high exposure levels to risks at lower exposure levels. Extrapolated risk estimates obtained using such models are heavily dependent upon assumptions about the shape of the dose-response relationship, the temporal distribution of risk following exposure, and variation of risk according to variables such as age at exposure, sex, and underlying population cancer rates. Expanded statistical models, which make explicit certain assumed relationships between different data sets, can be used to strengthen inferences by incorporating relevant information from diverse sources. They also allow the uncertainties inherent in information from related data sets to be expressed in estimates which partially depend upon that information. To the extent that informed opinion is based upon a valid assessment of scientific data, the larger context of decision theory, which includes statistical theory, provides a logical framework for the incorporation into public policy decisions of the informational content of expert opinion.

  16. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When site-specific data are minimal, the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
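
The model-averaging step lends itself to a compact numerical sketch. The snippet below is illustrative only: the BIC values, predictions and variances are made up, not taken from the report, and BIC-based weights are just one common way to form posterior model probabilities.

```python
import numpy as np

# Hypothetical BIC values for four alternative variogram models (illustrative).
bic = np.array([412.3, 409.8, 415.1, 430.2])
prior = np.full(bic.size, 1.0 / bic.size)      # uniform prior model probabilities

# Posterior model probabilities via the common BIC approximation:
# p(M_k | D) is proportional to p(M_k) * exp(-0.5 * (BIC_k - min BIC)).
weights = prior * np.exp(-0.5 * (bic - bic.min()))
weights /= weights.sum()

# Model-averaged prediction and variance (law of total variance) at one location.
pred = np.array([2.1, 2.0, 2.4, 3.0])          # per-model kriging predictions (made up)
var = np.array([0.20, 0.18, 0.25, 0.40])       # per-model prediction variances (made up)
avg = np.sum(weights * pred)
avg_var = np.sum(weights * (var + (pred - avg) ** 2))
print(weights.round(3), avg, avg_var)
```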

  17. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

The Portuguese Labor Force Survey has, from the 4th quarter of 2014 onwards, geo-referenced the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to pre-established sampling criteria, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variation; therefore model-based methods, which tend to

  18. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples […] error models. This is obtained by constructing an estimating function through projections of some chosen function of Y(ti+1) onto functions of previous observations Y(ti), …, Y(t0). The process of interest X(ti+1) is partially observed through a measurement equation Y(ti+1) = h(X(ti+1)) + noise, where h(·) is restricted to be a polynomial. Through a simulation study we compare for the CIR process the obtained estimator with an estimator derived from utilizing the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well.

  19. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.
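
The article's warning can be reproduced with a few lines of simulation. The sketch below uses synthetic data (not the Indian fertility data the article analyses): clustered outcomes are generated with no imitation whatsoever, yet regressing the individual outcome on the cluster mean of the same outcome produces a coefficient near one.

```python
import numpy as np

# Data-generating process: cluster random effect, NO imitation at all.
rng = np.random.default_rng(0)
n_clusters, n_per = 200, 20
u = rng.normal(0.0, 1.0, n_clusters)                        # community effects
y = u[:, None] + rng.normal(0.0, 1.0, (n_clusters, n_per))  # individual outcomes

# Endogenous regressor: the within-cluster mean of the dependent variable.
cluster_mean = np.repeat(y.mean(axis=1), n_per)
beta = np.polyfit(cluster_mean, y.ravel(), 1)[0]
print(beta)   # close to 1 although the true imitation effect is 0
```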

  20. Parametric model to estimate containment loads following an ex-vessel steam spike

    International Nuclear Information System (INIS)

    Lopez, R.; Hernandez, J.; Huerta, A.

    1998-01-01

    This paper describes the use of a relatively simple parametric model to estimate containment loads following an ex-vessel steam spike. The study was motivated because several PSAs have identified containment loads accompanying reactor vessel failures as a major contributor to early containment failure. The paper includes a detailed description of the simple but physically sound parametric model which was adopted to estimate containment loads following a steam spike into the reactor cavity. (author)
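
The paper's parametric model is not reproduced in this record, but the flavor of such an estimate can be conveyed with a toy steam mass balance; all values below are illustrative assumptions, not the paper's data.

```python
# Toy steam-spike estimate (not the paper's model): partial-pressure rise in a
# containment of free volume V when m_s kg of steam flashes into it, treating
# the added steam as an ideal gas at the assumed post-spike temperature T.
R_STEAM = 461.5      # J/(kg K), specific gas constant of water vapour
V = 50_000.0         # m^3, assumed containment free volume
m_s = 20_000.0       # kg of steam released in the spike (illustrative)
T = 400.0            # K, assumed mixed-atmosphere temperature

dp = m_s * R_STEAM * T / V     # partial-pressure rise of the steam [Pa]
print(f"Steam partial-pressure rise ~ {dp / 1e5:.2f} bar")
```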

  1. Path sets size, model specification, or model estimation: Which one matters most in predicting stochastic user equilibrium traffic flow?

    Directory of Open Access Journals (Sweden)

    Milad Haghani

    2016-06-01

Further investigations with respect to the relative importance of STA model estimation (or, equivalently, parameter calibration) and model specification (or, equivalently, error term formulation) are also conducted. A paired combinatorial logit (PCL) assignment model with an origin–destination-specific parameter, along with a heuristic method of model estimation (calibration), is proposed. The proposed model can not only accommodate the correlation between path utilities, but also account for the fact that travelling between different origin–destination (O–D) pairs can correspond to different levels of stochasticity and choice randomness. Results suggest that the estimation of stochastic user equilibrium (SUE) models can affect the outcome of the flow prediction far more meaningfully than the complexity of the choice model (i.e., model specification).

  2. Development on electromagnetic impedance function modeling and its estimation

    International Nuclear Information System (INIS)

    Sutarno, D.

    2015-01-01

Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances was developed based on the edge element method, whereas in the CSAMT case the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition

  4. Models and estimation methods for clinical HIV-1 data

    Science.gov (United States)

    Verotta, Davide

    2005-12-01

Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results; (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drug regimens.
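
As a rough sketch of the two-stage idea in (i), the snippet below fits a simple first-phase exponential decay to each patient separately (stage 1) and then summarizes the individual estimates (stage 2); the model and the synthetic patients are illustrative, not the paper's data.

```python
import numpy as np

# Stage 1: per-patient log-linear fit of V(t) = V0 * exp(-delta * t).
# Stage 2: summary statistics of the individual decay-rate estimates.
rng = np.random.default_rng(1)
t = np.array([0.0, 2.0, 4.0, 7.0, 14.0, 21.0])        # days after treatment start

deltas = []
for _ in range(30):                                   # 30 hypothetical patients
    v0 = 10 ** rng.uniform(4, 6)                      # baseline viral load
    delta = rng.lognormal(np.log(0.4), 0.3)           # per-patient decay rate
    v = v0 * np.exp(-delta * t) * rng.lognormal(0.0, 0.15, t.size)
    slope, _ = np.polyfit(t, np.log(v), 1)            # stage 1: individual fit
    deltas.append(-slope)

print(np.mean(deltas), np.std(deltas, ddof=1))        # stage 2: summary
```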

  5. Asymptotic distribution theory for break point estimators in models estimated via 2SLS

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Han, S.

    2012-01-01

    In this paper, we present a limiting distribution theory for the break point estimator in a linear regression model with multiple structural breaks obtained by minimizing a Two Stage Least Squares (2SLS) objective function. Our analysis covers both the case in which the reduced form for the

  6. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credits. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the Information value itself. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated value of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
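
A sketch of the proposed estimate might look as follows; the function names and synthetic data are illustrative, and the paper's exact adjustment of the square root of the number of bad clients is not reproduced here.

```python
import numpy as np

def supervised_bins(score, bad, k):
    """Bin edges such that each interval holds >= k good and >= k bad clients
    (the supervised interval selection idea, sketched; a production version
    would also merge a deficient final bin into its neighbour)."""
    order = np.argsort(score)
    s, b = score[order], bad[order]
    edges, n_bad, n_good = [s[0]], 0, 0
    for i in range(s.size - 1):
        n_bad += b[i]
        n_good += 1 - b[i]
        if n_bad >= k and n_good >= k:
            edges.append((s[i] + s[i + 1]) / 2.0)
            n_bad = n_good = 0
    edges.append(s[-1] + 1e-9)
    return np.asarray(edges)

def information_value(score, bad, edges):
    """Empirical Information Value over the given bin edges."""
    iv, n_bad_tot = 0.0, bad.sum()
    n_good_tot = bad.size - n_bad_tot
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (score >= lo) & (score < hi)
        p_bad = bad[in_bin].sum() / n_bad_tot
        p_good = (in_bin.sum() - bad[in_bin].sum()) / n_good_tot
        if p_bad > 0 and p_good > 0:
            iv += (p_good - p_bad) * np.log(p_good / p_bad)
    return iv

# Synthetic scores: good clients score higher on average than bad ones.
rng = np.random.default_rng(2)
bad = (rng.uniform(size=5000) < 0.1).astype(int)
score = rng.normal(loc=np.where(bad == 1, -0.5, 0.5), scale=1.0)

k = int(np.sqrt(bad.sum()))     # rough stand-in for the paper's adjusted sqrt
edges = supervised_bins(score, bad, k)
print(information_value(score, bad, edges))
```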

  7. Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  8. Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  9. Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  10. Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  11. Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-12-07

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  12. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  13. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

[…] propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all […]
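
The core point, that the independence assumption misses correlations a factored joint distribution captures, can be shown on a toy two-column table; the data and column semantics below are made up.

```python
import numpy as np

# Toy table with two correlated attributes: "model" depends strongly on "make".
rng = np.random.default_rng(7)
n = 100_000
make = rng.integers(0, 2, n)                 # attribute A
model = np.where(make == 0,
                 rng.integers(0, 2, n),      # make 0 only has models 0/1
                 rng.integers(2, 4, n))      # make 1 only has models 2/3

# Selectivity of the conjunctive predicate (make = 0 AND model = 0).
true_sel = ((make == 0) & (model == 0)).mean()
indep_sel = (make == 0).mean() * (model == 0).mean()          # P(A) * P(B)
factored_sel = (make == 0).mean() * (model[make == 0] == 0).mean()  # P(A) * P(B|A)
print(true_sel, indep_sel, factored_sel)   # independence is off by ~2x here
```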

  14. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method is as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
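
A minimal joint state-and-parameter EKF, in the spirit of this approach (not the paper's implementation), can be sketched for a toy logistic model with one unknown rate constant: the state is augmented with the parameter, which evolves as a random walk.

```python
import numpy as np

# Discretized logistic model x' = theta * x * (1 - x) with unknown theta,
# observed through y = x + noise. Augmented state z = [x, theta].
def f(z, dt):
    x, th = z
    return np.array([x + dt * th * x * (1.0 - x), th])

def jac(z, dt):
    x, th = z
    return np.array([[1.0 + dt * th * (1.0 - 2.0 * x), dt * x * (1.0 - x)],
                     [0.0, 1.0]])

rng = np.random.default_rng(3)
dt, n_steps, theta_true = 0.1, 200, 0.8
x, y = 0.05, []
for _ in range(n_steps):                 # simulate noisy measurements
    x = x + dt * theta_true * x * (1.0 - x)
    y.append(x + rng.normal(0.0, 0.02))

z = np.array([0.1, 0.3])                 # initial guesses for x and theta
P = np.diag([0.1, 1.0])
Q = np.diag([1e-6, 1e-6])                # small process noise keeps theta adaptive
R = 0.02 ** 2
H = np.array([[1.0, 0.0]])               # we observe x only
for yk in y:
    F = jac(z, dt)                       # linearize at the current estimate
    z, P = f(z, dt), F @ P @ F.T + Q     # predict
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S                    # Kalman gain (2x1)
    z = z + (K * (yk - z[0])).ravel()    # update state and parameter
    P = (np.eye(2) - K @ H) @ P

print(z)   # estimated [x, theta]; theta should approach 0.8
```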

  15. Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence

    Science.gov (United States)

    MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.

    2006-01-01

This is the first book to examine the latest methods in analyzing presence/absence data surveys. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. It provides authoritative insights into the latest in estimation modeling; discusses multiple models which lay the groundwork for future study designs; addresses the critical issue of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.

  16. Estimation of relative groundwater age in the granite at the Tono research site, central Japan

    Energy Technology Data Exchange (ETDEWEB)

    Iwatsuki, T. E-mail: iwatsuki@tono.jnc.go.jp; Xu, S.; Itoh, S.; Abe, M.; Watanabe, M

    2000-10-01

Isotopic studies have been conducted to develop a method to estimate subsurface hydraulic conditions in the granite at the Tono research site, central Japan. The groundwaters were classified into three groups according to residence time, based on the ³H concentration: (1) modern water recharged from the surface within the last 40 years, (2) a mixture of modern water and relatively old groundwaters, and (3) relatively old groundwater which is devoid of ³H. An attempt was made to date the ³H-free groundwater using ¹⁴C. The relative groundwater age estimated by a simple model, which assumes no addition of dead carbon, ranges from 6500 to 14,000 years B.P. The calcite precipitates on the fracture surfaces are also classified into three groups, based on carbon isotope compositions: (1) precipitation from groundwater within the last 50,000 years, (2) precipitation, more than 50,000 years ago, from solutions with δ¹³C values different from those of the groundwater, and (3) a mixture of new calcite partly precipitated within the last 50,000 years and relatively old calcite. These data suggest the possibility that the conductive (open) fractures connected to the surface can be identified by ¹⁴C determination in the calcite on fracture surfaces.

  18. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components, in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.

  19. Near Shore Wave Modeling and applications to wave energy estimation

    Science.gov (United States)

    Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.

    2012-04-01

The estimation of the wave energy potential at the European coastline has been receiving increased attention in recent years as a result of the adoption of novel policies in the energy market, concerns about global warming and nuclear energy security problems. Within this framework, numerical wave modeling systems keep a primary role in the accurate description of the wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near shore wave simulations. The results obtained from the wave models near the shore are studied from an energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible different aspects captured by the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing in this way to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.

  20. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  1. A practical model for pressure probe system response estimation (with review of existing models)

    Science.gov (United States)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  2. Multilevel Autoregressive Mediation Models: Specification, Estimation, and Applications.

    Science.gov (United States)

    Zhang, Qian; Wang, Lijuan; Bergeman, C S

    2017-11-27

    In the current study, extending from the cross-lagged panel models (CLPMs) in Cole and Maxwell (2003), we proposed the multilevel autoregressive mediation models (MAMMs) by allowing the coefficients to differ across individuals. In addition, Level-2 covariates can be included to explain the interindividual differences of mediation effects. Given the complexity of the proposed models, Bayesian estimation was used. Both a CLPM and an unconditional MAMM were fitted to daily diary data. The 2 models yielded different statistical conclusions regarding the average mediation effect. A simulation study was conducted to examine the estimation accuracy of Bayesian estimation for MAMMs and consequences of model mis-specifications. Factors considered included the sample size (N), number of time points (T), fixed indirect and direct effect sizes, and Level-2 variances and covariances. Results indicated that the fixed effect estimates for the indirect effect components (a and b) and the fixed effects of Level-2 covariates were accurate when N ≥ 50 and T ≥ 5. For estimating Level-2 variances and covariances, they were accurate provided a sufficiently large N and T (e.g., N ≥ 500 and T ≥ 50). Estimates of the average mediation effect were generally accurate when N ≥ 100 and T ≥ 10, or N ≥ 50 and T ≥ 20. Furthermore, we found that when Level-2 variances were zero, MAMMs yielded valid inferences about the fixed effects, whereas when random effects existed, CLPMs had low coverage rates for fixed effects. DIC can be used for model selection. Limitations and future directions were discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  4. Estimation of the Human Absorption Cross Section Via Reverberation Models

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri

    2018-01-01

[…] and compare the obtained results to those of Sabine's model. We find that the absorption by persons is large enough to be measured with a wideband channel sounder and that estimates of the human absorption cross section differ for the two models. The obtained values are comparable to values reported in the literature. We also suggest the use of controlled environments with low average absorption coefficients to obtain more reliable estimates. The obtained values can be used to predict the change of reverberation time with persons in the propagation environment. This allows prediction of channel characteristics relevant in communication systems, e.g. path loss and rms delay spread, for various population densities.

  5. METHODOLOGY RELATED TO ESTIMATION OF INVESTMENT APPEAL OF RURAL SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    A. S. Voshev

    2010-03-01

Conditions for production activity vary considerably from region to region, from area to area, and from settlement to settlement. Investors are therefore challenged to choose an optimum site for a new enterprise. To make the decision, investors rely on references such as investment potential and risk level, whose interrelation determines the investment appeal of a country, region, area, city or rural settlement. At present Russia faces the problem of a multitude of rural settlements that are «black boxes»: until now, no effective and suitable techniques have existed for the quantitative estimation of the investment potential and risks of rural settlements, nor systems to make such information accessible to potential investors.

  6. Estimation of Continuous Velocity Model Variations in Rock Deformation Tests.

    Science.gov (United States)

    Flynn, J. W.; Tomas, R.; Benson, P. M.

    2017-12-01

Seismic interferometry, using either seismic wave coda or ambient noise, is a passive technique to image the sub-surface seismic velocity structure, which directly relates to the physical properties of the material through which the waves travel. The methodology estimates the Green's function for the volume between two seismic stations by cross-correlating long time series of ambient noise recorded at both stations, the Green's function being effectively the seismogram recorded at one station due to an impulsive or instantaneous energy source at the second station. In laboratory rock deformation experiments, changes in the velocity structure of the rock sample are generally measured through active surveys using an array of AE piezoelectric P-wave transducers, producing a time series of ultrasonic velocities in both axial and radial directions. The velocity information from the active surveys is used to provide a time-dependent velocity model for the inversion of AE event source locations. These velocity measurements are carried out at regular intervals throughout the laboratory test, interrupting passive AE monitoring for the duration of the surveys. There is therefore a trade-off between the frequency at which the active velocity surveys are carried out to optimise the velocity model and the availability of a complete AE record during the rock deformation test. This study proposes to use noise interferometry to provide a continuous measurement of velocity variations in a rock sample during a laboratory rock deformation experiment, without the need to carry out active velocity surveys, while simultaneously passively monitoring AE activity. The continuous noise source in this test is an AE transducer fed with a white Gaussian noise signal from a function generator. Data from all AE transducers are continuously acquired and recorded during the deformation experiment. The cross correlation of the continuous AE record is used to produce a continuous velocity

  7. Biomass models to estimate carbon stocks for hardwood tree species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Montero, G.; Rio, M. del

    2012-11-01

To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, and above- and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter, whereas for the majority of species total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in accuracy: a mean reduction of 20% in root mean square error and a mean increase of 7% in model efficiency in comparison with recently published models. The fitted models thus allow a more accurate estimation of the biomass stock of hardwood species from Spanish National Forest Inventory data. (Author) 45 refs.

  8. Heritability estimates for yield and related traits in bread wheat

    International Nuclear Information System (INIS)

    Din, R.; Jehan, S.; Ibraullah, A.

    2009-01-01

A set of 22 experimental wheat lines along with four check cultivars was evaluated in irrigated and unirrigated environments with the objective of determining genetic and phenotypic variation and heritability estimates for yield and its related traits. The two environments were statistically at par for physiological maturity, plant height, spikes m⁻², spikelets spike⁻¹ and 1000-grain weight. Highly significant genetic variability existed among the wheat lines (P < 0.01) in the combined analysis across the two test environments for all traits except 1000-grain weight. Genotype × environment interactions were non-significant for all traits, indicating consistent performance of the lines in the two test environments. However, lines and check cultivars matured two to five days earlier under the unirrigated environment. Plant height, spikes m⁻² and 1000-grain weight were also reduced under the unirrigated environment. Genetic variances were greater than environmental variances for most traits. Heritability estimates were high (0.74 to 0.96) for plant height, medium (0.31 to 0.56) for physiological maturity, spikelets spike⁻¹ (unirrigated) and 1000-grain weight, and low for spikes m⁻². (author)

  9. Estimation of oil toxicity using an additive toxicity model

    International Nuclear Information System (INIS)

    French, D.

    2000-01-01

The impacts on aquatic organisms resulting from acute exposure to aromatic mixtures released from oil spills can be modeled using a newly developed toxicity model. This paper presents a summary of the model's development for the toxicity of monoaromatic and polycyclic aromatic hydrocarbon mixtures. Such toxicity is normally difficult to quantify because oils are mixtures of a variety of hydrocarbons with different toxicities and environmental fates, and because aromatic hydrocarbons are volatile, making it difficult to expose organisms to constant concentrations in bioassay tests. The newly developed and validated model corrects toxicity for time and temperature of exposure. In addition, it estimates the toxicity of each aromatic in the oil-derived mixture. The toxicity of the mixture can be estimated by the weighted sum of the toxicities of the individual compounds. Acute toxicity is estimated as LC50 (lethal concentration to 50 per cent of exposed organisms). Sublethal effects levels are estimated from LC50s. The model was verified with available oil bioassay data. It was concluded that oil toxicity is a function of the aromatic content and composition of the oil as well as the fate and partitioning of those components in the environment. 81 refs., 19 tabs., 1 fig
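
Under a concentration-addition assumption, the weighted-sum idea reduces to the familiar toxic-units calculation sketched below; the component fractions and LC50 values are illustrative, not the paper's data.

```python
# Concentration-addition estimate of a mixture LC50 (toxic units):
# 1 / LC50_mix = sum_i f_i / LC50_i, where f_i is the mass fraction of
# aromatic i in the dissolved mixture. All numbers are illustrative.
fractions = {"naphthalene": 0.55, "phenanthrene": 0.30, "toluene": 0.15}
lc50 = {"naphthalene": 2.1, "phenanthrene": 0.6, "toluene": 28.0}   # mg/L

lc50_mix = 1.0 / sum(f / lc50[c] for c, f in fractions.items())
print(f"Estimated mixture LC50: {lc50_mix:.2f} mg/L")
```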

  10. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
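
A minimal sketch of such an estimation, assuming the usual five-parameter single diode equation and its explicit Lambert-W solution, fits the parameters to one synthetic I-V curve (the report's method uses many curves across irradiance and temperature conditions):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import lambertw

K_OVER_Q = 8.617333262e-5          # Boltzmann constant / electron charge [V/K]

def single_diode_i(v, il, i0, rs, rsh, n, ns=60, temp=298.15):
    """Current from I = IL - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    with a = n*Ns*k*T/q, solved explicitly via the Lambert W function."""
    a = n * ns * K_OVER_Q * temp
    arg = (rs * i0 * rsh / (a * (rs + rsh))) * np.exp(
        rsh * (rs * (il + i0) + v) / (a * (rs + rsh)))
    return (il + i0 - v / rsh) / (1.0 + rs / rsh) - (a / rs) * lambertw(arg).real

# Synthetic "measured" I-V points for one irradiance/temperature condition.
rng = np.random.default_rng(4)
v = np.linspace(0.0, 36.0, 40)
i_meas = single_diode_i(v, 8.0, 1e-9, 0.3, 300.0, 1.1) + rng.normal(0.0, 0.01, v.size)

res = least_squares(
    lambda p: single_diode_i(v, *p) - i_meas,
    x0=[7.0, 1e-8, 0.5, 200.0, 1.3],
    bounds=([0.1, 1e-12, 0.01, 10.0, 0.8], [20.0, 1e-6, 2.0, 2000.0, 2.0]),
    x_scale='jac')
print(res.x)   # recovered [IL, I0, Rs, Rsh, n]
```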

  11. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
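
As one concrete member of such a suite, the Jassby-Platt hyperbolic-tangent form can be fitted to a synthetic P-I data set to recover the initial slope and the assimilation number; the numbers below are made up, not the station Aloha profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def jassby_platt(i, p_m, alpha):
    """Photosynthesis-irradiance function: P = P_m * tanh(alpha * I / P_m),
    where P_m is the assimilation number and alpha the initial slope."""
    return p_m * np.tanh(alpha * i / p_m)

rng = np.random.default_rng(5)
irradiance = np.linspace(5.0, 1500.0, 30)     # synthetic light levels
production = jassby_platt(irradiance, 6.0, 0.04) + rng.normal(0.0, 0.2, 30)

(p_m, alpha), _ = curve_fit(jassby_platt, irradiance, production, p0=(5.0, 0.05))
print(f"assimilation number P_m ~ {p_m:.2f}, initial slope alpha ~ {alpha:.3f}")
```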

  12. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Science.gov (United States)

    Wang, Xiao-Feng; Wang, Bin

    2011-01-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139

  13. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

The success of groundwater modeling is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in the irrigation data and different calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in parameter estimates and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
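
A toy version of the reweighting ingredient (not the paper's full IUWLS formulation, and not a cure for attenuation bias by itself) is sketched below for a one-parameter drawdown model with observation-specific pumping uncertainty:

```python
import numpy as np

# Toy model: drawdown y = theta * q, with pumping rates q observed with known,
# observation-specific error. The effective residual variance
# sig_y^2 + theta^2 * sig_q_i^2 depends on theta itself, so the weights are
# recomputed from the current estimate until convergence.
rng = np.random.default_rng(6)
m, theta_true, sig_y = 80, 0.8, 0.05
sig_q = rng.uniform(0.05, 1.0, m)              # per-observation pumping errors
q_true = rng.uniform(1.0, 10.0, m)
q_obs = q_true + rng.normal(0.0, sig_q)        # uncertain pumping data
y = theta_true * q_true + rng.normal(0.0, sig_y, m)

theta = q_obs @ y / (q_obs @ q_obs)            # ordinary least squares start
for _ in range(50):
    w = 1.0 / (sig_y**2 + theta**2 * sig_q**2) # uncertainty-dependent weights
    theta_new = (w * q_obs) @ y / ((w * q_obs) @ q_obs)
    if abs(theta_new - theta) < 1e-12:
        break
    theta = theta_new
print(theta)   # down-weights the most uncertain pumping observations
```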

  14. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the

  15. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating evermore functionality onto a single chip, forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so […] an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework […] is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project […]

  16. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails, and the plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
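
One widely used simplification of the sum of lognormals is Fenton-Wilkinson moment matching; the sketch below (illustrative numbers, not the paper's models) compares its tail probability against Monte Carlo, which is the kind of discrepancy the paper is concerned with.

```python
import numpy as np
from scipy import stats

# Fenton-Wilkinson: approximate the sum of n i.i.d. lognormal leakage currents
# by one lognormal matching the first two moments of the sum.
mu, sigma, n = np.log(1e-9), 0.8, 64           # per-device log-leakage (made up)

m1 = n * np.exp(mu + sigma**2 / 2)                            # mean of the sum
v1 = n * (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)   # variance of the sum
sig_fw = np.sqrt(np.log(1.0 + v1 / m1**2))
mu_fw = np.log(m1) - sig_fw**2 / 2

threshold = 1.3 * m1                           # a tail event for total leakage
p_fw = stats.lognorm.sf(threshold, s=sig_fw, scale=np.exp(mu_fw))

rng = np.random.default_rng(8)
totals = rng.lognormal(mu, sigma, size=(100_000, n)).sum(axis=1)
p_mc = (totals > threshold).mean()
print(p_fw, p_mc)   # the approximated tail can deviate from the empirical one
```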

  17. Estimation of traffic accident costs: a prompted model.

    Science.gov (United States)

    Hejazi, Rokhshad; Shamsudin, Mad Nasir; Radam, Alias; Rahim, Khalid Abdul; Ibrahim, Zelina Zaitun; Yazdani, Saeed

    2013-01-01

    Traffic accidents are the reason for 25% of unnatural deaths in Iran. The main objective of this study is to find a simple model for the estimation of economic costs, especially in Islamic countries (like Iran), in a straightforward manner. The model can show the magnitude of traffic accident costs in monetary terms. Data were collected from different sources that included traffic police records, insurance companies and hospitals. The conceptual framework in our study was based on the method of Ayati, who used this method for the estimation of economic costs in Iran. We refined his method to use a minimum number of variables. Our final model has only three variables, all available from insurance companies and police records. Running the model showed that the traffic accident costs were US$2.2 million in 2007 for our case study route.

  18. PDS-Modelling and Regional Bayesian Estimation of Extreme Rainfalls

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan; Harremoës, Poul

    1994-01-01

    Since 1979 a country-wide system of raingauges has been operated in Denmark in order to obtain a better basis for design and analysis of urban drainage systems. As an alternative to the traditional non-parametric approach, the Partial Duration Series method is employed in the modelling of extreme ... in Denmark cannot be justified. In order to obtain an estimation procedure at non-monitored sites and to improve at-site estimates, a regional Bayesian approach is adopted. The empirical regional distributions of the parameters in the Partial Duration Series model are used as prior information ... The application of the Bayesian approach is derived in the case of both exponential and generalized Pareto distributed exceedances. Finally, the aspect of including economic perspectives in the estimation of the design events is briefly discussed.
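
    A minimal Python sketch of the Bayesian updating step for the exponential-exceedance case, assuming a conjugate Gamma prior on the exponential rate whose hyperparameters stand in for the empirical regional distribution of the PDS parameter; all numbers are illustrative, not the Danish raingauge data.

        import numpy as np

        # Regional prior on the exponential rate: Gamma(alpha0, beta0).
        alpha0, beta0 = 4.0, 20.0   # prior mean rate alpha0/beta0 = 0.2 per mm

        x = np.array([3.1, 7.4, 2.2, 12.8, 5.0])  # at-site exceedances (mm)

        # Conjugate update: posterior is Gamma(alpha0 + n, beta0 + sum(x)).
        alpha_n, beta_n = alpha0 + x.size, beta0 + x.sum()
        print("posterior mean rate:", alpha_n / beta_n)
        print("posterior mean exceedance:", beta_n / (alpha_n - 1))  # E[1/rate]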

  19. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  20. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorus losses for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  1. Simplified evacuation model for estimating mitigation of early population exposures

    International Nuclear Information System (INIS)

    Strenge, D.L.

    1980-12-01

    The application of a simple evacuation model to the prediction of expected population exposures following acute releases of activity to the atmosphere is described. The evacuation model of Houston is coupled with a normalized Gaussian dispersion calculation to estimate the time integral of population exposure. The methodology described can be applied to specific sites to determine the expected reduction of population exposures due to evacuation
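
    A minimal Python sketch of the normalized Gaussian dispersion calculation that such a methodology couples to the evacuation model: the standard ground-reflected Gaussian plume chi/Q. The dispersion parameters and release geometry are illustrative, not taken from the report.

        import numpy as np

        def chi_over_q(y, z, u, sigma_y, sigma_z, h):
            # Normalized concentration chi/Q (s/m^3) for a continuous point
            # release with ground reflection; sigma_y, sigma_z are evaluated
            # at the downwind distance of interest.
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                        np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
            return lateral * vertical / (2.0 * np.pi * u * sigma_y * sigma_z)

        # Ground-level centerline value 1 km downwind of a 30 m release, u = 4 m/s.
        print(chi_over_q(0.0, 0.0, 4.0, sigma_y=80.0, sigma_z=40.0, h=30.0))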

  2. Comparison of two intelligent models to estimate the instantaneous ...

    Indian Academy of Sciences (India)

    ... they are 85.46 (W/m2), 3.08 (W/m2) and 5.41, respectively. As the results indicate, both models are able to estimate the amount of radiation well, while the neural network has a higher accuracy. The output of the models for six other cities of Iran, with similar climate conditions, also proves the ability of the proposed models.

  3. Estimating a dynamic model of sex selection in China.

    Science.gov (United States)

    Ebenstein, Avraham

    2011-05-01

    High ratios of males to females in China, which have historically concerned researchers (Sen 1990), have increased in the wake of China's one-child policy, which began in 1979. Chinese policymakers are currently attempting to correct the imbalance in the sex ratio through initiatives that provide financial compensation to parents with daughters. Other scholars have advocated a relaxation of the one-child policy to allow more parents to have a son without engaging in sex selection. In this article, I present a model of fertility choice when parents have access to a sex-selection technology and face a mandated fertility limit. By exploiting variation in fines levied in China for unsanctioned births, I estimate the relative price of a son and daughter for mothers observed in China's census data (1982-2000). I find that a couple's first son is worth 1.42 years of income more than a first daughter, and the premium is highest among less-educated mothers and families engaged in agriculture. Simulations indicate that a subsidy of 1 year of income to families without a son would reduce the number of "missing girls" by 67% but impose an annual cost of 1.8% of Chinese gross domestic product (GDP). Alternatively, a three-child policy would reduce the number of "missing girls" by 56% but increase the fertility rate by 35%.

  4. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  5. Time-of-flight estimation based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.

    We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a

  6. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for an interval of three years (2010 - 2012), measured for Yola using various instruments and collected from the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  7. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  8. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  9. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)


    (1994) extended the work by Fries and Bhattacharyya (1983) to include the maximum likelihood analysis of the two-factor inverse Gaussian model for the unbalanced and interaction case for the estimation of small area parameters in finite populations. The object of this article is to develop a Bayesian approach for small ...

  10. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  11. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  12. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  13. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal. ... On the other hand, the same slope coefficients β, under Generalized Least Squares (GLS), decreased with increased autocorrelation when the sample size T is small. Journal of the Nigerian Association ...

  14. ON THE ESTIMATION AND PREDICTION IN MIXED LINEAR MODELS

    Directory of Open Access Journals (Sweden)

    LÓPEZ L.A.

    1998-01-01

    Full Text Available Beginning with the classical Gauss-Markov linear model for mixed effects, the technique of Lagrange multipliers is used to obtain an alternative method for the estimation of linear predictors. A structural method is also discussed in order to obtain the variance and covariance matrices and their inverses.

  15. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to

  16. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Vol. 50, No. 6 (2014), pp. 849-868, ISSN 0023-5954 Institutional support: RVO:67985556 Keywords: Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  17. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organized network are presented. Experimental estimates of the average radio channel utilization, with and without buffering of ...

  18. Efficient Bayesian Estimation and Combination of GARCH-Type Models

    NARCIS (Netherlands)

    D. David (David); L.F. Hoogerheide (Lennart)

    2010-01-01

    This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation

  19. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  20. Remote sensing estimates of impervious surfaces for pluvial flood modelling

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Drews, Martin

    This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...

  1. Models for estimation of carbon sequestered by Cupressus ...

    African Journals Online (AJOL)

    This study compared models for estimating carbon sequestered aboveground in Cupressus lusitanica plantation stands at Wondo Genet College of Forestry and Natural Resources, Ethiopia. Relationships of carbon storage with tree component and stand age were also investigated. Thirty trees of three different ages (5, ...

  2. Influence of Taxonomic Relatedness and Chemical Mode of Action in Acute Interspecies Estimation Models for Aquatic species

    Science.gov (United States)

    Ecological risks to aquatic organisms are typically assessed using toxicity data for relatively few species and with limited understanding of relative species sensitivity. We developed a comprehensive set of interspecies correlation estimation (ICE) models for aquatic organisms a...

  3. Simultaneously estimating the task-related and stimulus-evoked components of hemodynamic imaging measurements.

    Science.gov (United States)

    Herman, Max Charles; Cardoso, Mariana M B; Lima, Bruss; Sirotin, Yevgeniy B; Das, Aniruddha

    2017-07-01

    Task-related hemodynamic responses contribute prominently to functional magnetic resonance imaging (fMRI) recordings. They reflect behaviorally important brain states, such as arousal and attention, and can dominate stimulus-evoked responses, yet they remain poorly understood. To help characterize these responses, we present a method for parametrically estimating both stimulus-evoked and task-related components of hemodynamic responses from subjects engaged in temporally predictable tasks. The stimulus-evoked component is modeled by convolving a hemodynamic response function (HRF) kernel with spiking. The task-related component is modeled by convolving a Fourier-series task-related function (TRF) kernel with task timing. We fit this model with simultaneous electrode recordings and intrinsic-signal optical imaging from the primary visual cortex of alert, task-engaged monkeys. With high [Formula: see text], the model returns HRFs that are consistent across experiments and recording sites for a given animal and TRFs that entrain to task timing independent of stimulation or local spiking. When the task schedule conflicts with that of stimulation, the TRF remains locked to the task emphasizing its behavioral origins. The current approach is strikingly more robust to fluctuations than earlier ones and gives consistently, if modestly, better fits. This approach could help parse the distinct components of fMRI recordings made in the context of a task.
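
    A minimal Python sketch of the two-kernel regression idea described above: the recorded signal is modeled as an HRF kernel convolved with spiking plus a task-locked kernel convolved with trial onsets, and both kernels are recovered jointly by least squares. For simplicity the task kernel here is free-form rather than the paper's Fourier-series TRF, and all data are synthetic.

        import numpy as np

        def conv_design(driver, klen):
            # Columns are lagged copies of driver, so X @ kernel equals the
            # causal convolution of driver with kernel.
            T = len(driver)
            return np.column_stack([np.r_[np.zeros(k), driver[:T - k]]
                                    for k in range(klen)])

        T, klen = 2000, 40
        rng = np.random.default_rng(2)
        spikes = (rng.random(T) < 0.05).astype(float)  # spiking regressor
        task = np.zeros(T)
        task[::200] = 1.0                              # predictable task onsets
        hrf = np.arange(klen) * np.exp(-np.arange(klen) / 8.0)  # toy kernels
        trf = np.sin(np.arange(klen) / 6.0)

        X = np.hstack([conv_design(spikes, klen), conv_design(task, klen)])
        y = X @ np.r_[hrf, trf] + 0.5 * rng.standard_normal(T)

        est, *_ = np.linalg.lstsq(X, y, rcond=None)
        hrf_hat, trf_hat = est[:klen], est[klen:]      # recovered kernels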

  4. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
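
    The Python sketch below illustrates what appears to be the core operation of such an eigenspace perturbation: eigendecompose the (symmetric, trace-free) Reynolds-stress anisotropy tensor and shift its eigenvalues a fraction delta toward a limiting state such as the one-component limit. The tensor values, delta, and function name are invented for the example.

        import numpy as np

        def perturb_anisotropy(b, delta, target_eigs):
            # eigh returns eigenvalues in ascending order; target_eigs must be
            # supplied in the same order, e.g. (-1/3, -1/3, 2/3) for the
            # one-component limit.
            eigvals, eigvecs = np.linalg.eigh(b)
            shifted = (1.0 - delta) * eigvals + delta * np.asarray(target_eigs)
            return eigvecs @ np.diag(shifted) @ eigvecs.T

        b = np.array([[0.10, 0.05, 0.00],
                      [0.05, -0.02, 0.00],
                      [0.00, 0.00, -0.08]])   # trace-free anisotropy tensor
        b_perturbed = perturb_anisotropy(b, 0.3, (-1/3, -1/3, 2/3))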

  5. The Estimation Modelling of Damaged Areas by Harmful Animals

    Science.gov (United States)

    Jang, R.; Sung, M.; Hwang, J.; Jeon, S. W.

    2017-12-01

    The Republic of Korea has undergone rapid urban development without sufficient consideration of the environment. This type of growth is accompanied by a reduction in forest area and wildlife habitat, a phenomenon that affects the habitat of large mammals more than that of small ones. In Korea especially, the damage caused by wild boar (Sus scrofa) is harsher than that caused by other large mammalian species such as water deer (Hydropotes inermis), and the number of reported cases for this species is correspondingly higher than for other mammals. Wild boar have three to eight cubs per year and can breed every year, which makes the population large relative to its fragmented habitats. The wild boar can be regarded as one of the top predators in Korea, so human intervention in its population control is inevitable. In addition, some individuals have been displaced from their major habitats, or invade areas of human activity in search of food, thereby destroying crops. Ultimately, this mammal species has come to be treated as a farm pest, also being involved in road kills and urban emergences. In this study, we estimated the points where harmful animals are likely to appear, using 2,505 wildlife damage sites in Gyeongnam province in Korea together with four types of geological information, four kinds of forest information, land cover, and the distribution of farmland. In the estimation model, built with MAXENT, the number of background points was set to 10,000; 70% of the damaged sites were used to construct the model and 30% were used for verification, with 10-fold cross-validation verified by the AUC of the ROC curve. As a result of the analyses, the AUC was 0.847, and the largest percent contributions were the distance to inner-forest areas (36.1%), land cover (16.5%), and distance from fields (14.9%). Furthermore, the permutation importance was 24.9% for land cover, 12.3% for the height

  6. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Science.gov (United States)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  7. Estimation of oxide related electron trap energy of porous silicon nanostructures

    International Nuclear Information System (INIS)

    Das, Mainak Mohan; Ray, Mallar; Bandyopadhyay, Nil Ratan; Hossain, Syed Minhaz

    2010-01-01

    Estimation of electron trap energy (E_t), with respect to the bulk Si valence band, of oxidized porous silicon (PS) nanostructures is reported. Photoluminescence (PL) spectra of oxidized PS prepared with different formation parameters have been investigated and the room temperature PL characteristics have been successfully explained on the basis of oxide related trap assisted transitions. PL peak energy for the oxidized samples with low porosity exhibited a blue shift with increasing formation current density (J). For the high porosity samples double peaks appeared in the PL spectra. One of these peaks remained constant at ∼730 nm while the other was blue shifted with increase in J. Evolution of the PS nanostructure was correlated to the formation parameters using a simple growth mechanism. The PS nanostructure was modelled as an array of regular hexagonal pores and the average value of E_t was estimated to be 1.67 eV.

  8. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
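
    A minimal Python sketch of the BIC-weighted averaging step described above, with hypothetical BIC scores, means, and within-model variances for the three AI models; the between-model term is the non-uniqueness contribution to the total variance.

        import numpy as np

        bic = np.array([112.4, 108.9, 121.7])     # TS-FL, ANN, NF (hypothetical)
        means = np.array([3.2, 4.1, 3.6])         # each model's K estimate (m/day)
        variances = np.array([0.40, 0.55, 0.35])  # within-model variances

        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                              # BMA model weights

        k_bma = np.sum(w * means)
        within = np.sum(w * variances)
        between = np.sum(w * (means - k_bma) ** 2)
        print(k_bma, within + between)            # averaged estimate, total variance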

  9. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the life time distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + b*t^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using Monte Carlo Method is done to compare the performance of Bayes with regression estimators of (a,b). The criterion for comparison is made based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimations for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + b*t, where a, b are greater than zero, can be obtained as the special case, letting c=2
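
    For concreteness, the survival function implied by this hazard rate follows from the cumulative hazard H(t) = a*t + (b/c)*t^c; the short Python sketch below evaluates it with illustrative parameter values, and c = 2 recovers the linearly increasing special case.

        import numpy as np

        def survival(t, a, b, c):
            # S(t) = exp(-H(t)) with H(t) = a*t + (b/c)*t^c
            # for h(t) = a + b*t^(c-1).
            return np.exp(-(a * t + (b / c) * t**c))

        t = np.linspace(0.0, 5.0, 6)
        print(survival(t, a=0.2, b=0.1, c=2.0))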

  10. Tyre pressure monitoring using a dynamical model-based estimator

    Science.gov (United States)

    Reina, Giulio; Gentile, Angelo; Messina, Arcangelo

    2015-04-01

    In the last few years, various control systems have been investigated in the automotive field with the aim of increasing safety and stability, avoiding roll-over, and customising handling characteristics. One critical issue connected with their integration is the lack of state and parameter information. As an example, vehicle handling depends to a large extent on tyre inflation pressure. When inflation pressure drops, handling and comfort performance generally deteriorate. In addition, it results in an increase in fuel consumption and in a decrease in tyre lifetime. Therefore, it is important to keep tyres within the normal inflation pressure range. This paper introduces a model-based approach to estimate online tyre inflation pressure. First, basic vertical dynamic modelling of the vehicle is discussed. Then, a parameter estimation framework for dynamic analysis is presented. Several important vehicle parameters, including tyre inflation pressure, can be estimated using the estimated states. This method aims to work during normal driving using information from standard sensors only. On the one hand, the driver is informed about the inflation pressure and is warned of sudden changes. On the other hand, accurate estimation of the vehicle states is available as possible input to onboard control systems.

  11. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2009-11-01

    Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0° and 50° N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model

  12. Comparison of wildfire smoke estimation methods and associations with cardiopulmonary-related hospital admissions

    Science.gov (United States)

    Gan, Ryan W.; Ford, Bonne; Lassman, William; Pfister, Gabriele; Vaidyanathan, Ambarish; Fischer, Emily; Volckens, John; Pierce, Jeffrey R.; Magzamen, Sheryl

    2017-01-01

    Climate forecasts predict an increase in frequency and intensity of wildfires. Associations between health outcomes and population exposure to smoke from Washington 2012 wildfires were compared using surface monitors, chemical-weather models, and a novel method blending three exposure information sources. The association between smoke particulate matter ≤2.5 μm in diameter (PM2.5) and cardiopulmonary hospital admissions occurring in Washington from 1 July to 31 October 2012 was evaluated using a time-stratified case-crossover design. Hospital admissions aggregated by ZIP code were linked with population-weighted daily average concentrations of smoke PM2.5 estimated using three distinct methods: a simulation with the Weather Research and Forecasting with Chemistry (WRF-Chem) model, a kriged interpolation of PM2.5 measurements from surface monitors, and a geographically weighted ridge regression (GWR) that blended inputs from WRF-Chem, satellite observations of aerosol optical depth, and kriged PM2.5. A 10 μg/m3 increase in GWR smoke PM2.5 was associated with an 8% increased risk in asthma-related hospital admissions (odds ratio (OR): 1.076, 95% confidence interval (CI): 1.019–1.136); other smoke estimation methods yielded similar results. However, point estimates for chronic obstructive pulmonary disease (COPD) differed by smoke PM2.5 exposure method: a 10 μg/m3 increase using GWR was significantly associated with increased risk of COPD (OR: 1.084, 95%CI: 1.026–1.145) and not significant using WRF-Chem (OR: 0.986, 95%CI: 0.931–1.045). The magnitude (OR) and uncertainty (95%CI) of associations between smoke PM2.5 and hospital admissions were dependent on estimation method used and outcome evaluated. Choice of smoke exposure estimation method used can impact the overall conclusion of the study. PMID:28868515

  13. Leaf Relative Water Content Estimated from Leaf Reflectance and Transmittance

    Science.gov (United States)

    Vanderbilt, Vern; Daughtry, Craig; Dahlgren, Robert

    2016-01-01

    Remotely sensing the water status of plants and the water content of canopies remain long term goals of remote sensing research. In the research we report here, we used optical polarization techniques to monitor the light reflected from the leaf interior, R, as well as the leaf transmittance, T, as the relative water content (RWC) of corn (Zea mays) leaves decreased. Our results show that R and T both change nonlinearly, and that the nonlinearities cancel in the ratio R/T, which appears linearly related to RWC for RWC less than 90%. The results suggest that potentially leaf water status, and perhaps even canopy water status, could be monitored starting from leaf and canopy optical measurements.

  14. An Estimation of the Influence of Relational Factors on Loyalty

    OpenAIRE

    Petrușcă Claudia Ioana

    2012-01-01

    The main objective of relationship marketing is to establish and maintain long-term relationships that translate into customer loyalty. Following this introduction describing the significance of customer loyalty, the article discusses the conceptualisation of relational determinants of loyalty within the three common approaches used in the loyalty literature for the business-to-business market. This represents the theoretical framework that is the basis for formulating the research hypo...

  15. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...... Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance....

  16. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter

  17. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
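
    As a minimal illustration of the parameter estimation task described above, the Python sketch below fits a first-order degradation model C(t) = C0*exp(-k*t) to invented concentration data by nonlinear least squares; realistic pesticide models are often systems of differential equations rather than a single closed-form curve.

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(t, c0, k):
            return c0 * np.exp(-k * t)

        t_days = np.array([0, 3, 7, 14, 28, 56], dtype=float)
        conc = np.array([10.0, 8.1, 6.4, 4.2, 1.9, 0.4])  # mg/kg, hypothetical

        (c0_hat, k_hat), cov = curve_fit(first_order, t_days, conc, p0=(10.0, 0.05))
        print("k =", k_hat, "DT50 =", np.log(2) / k_hat)  # half-life in days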

  18. Novel mathematical model to estimate ball impact force in soccer.

    Science.gov (United States)

    Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo

    2017-11-22

    Quantifying ball impact force during soccer kicking is important from both performance and chronic injury prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model to better capture the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account a rapid change in ball surface area and heterogeneous ball deformation showed a distinctive advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact forces more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball impact time was also systematically shortened as ball velocity increased, in contrast to the practical understanding for producing faster ball velocity; hence, ball contact time must be considered carefully from a practical point of view.

  19. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden

  20. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criteria (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
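
    A minimal Python sketch of fitting a negative binomial distribution to sighting frequencies by maximum likelihood; the counts are synthetic stand-ins, not the grizzly bear data, and the (r, p) parameterization follows scipy's convention.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        counts = np.array([1, 2, 1, 4, 3, 1, 2, 6, 1, 2, 3, 1])

        def nll(params):
            r, p = params  # NBD size and success probability
            if r <= 0 or not 0.0 < p < 1.0:
                return np.inf
            return -stats.nbinom.logpmf(counts, r, p).sum()

        res = minimize(nll, x0=[2.0, 0.5], method="Nelder-Mead")
        r_hat, p_hat = res.x
        print("fitted mean count:", r_hat * (1 - p_hat) / p_hat)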

  1. Comparison of regression models for estimation of isometric wrist joint torques using surface electromyography

    Directory of Open Access Journals (Sweden)

    Menon Carlo

    2011-09-01

    Full Text Available Abstract Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results It was shown that mean adjusted coefficient of determination (R_a^2) values decrease by 20%-35% for different models after one hour, while altering arm posture decreased mean R_a^2 values by 64% to 74% for different models. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times.

  2. The Figure 8 Model of International Relations

    National Research Council Canada - National Science Library

    Sibayan, Jerome T

    2008-01-01

    .... The Figure 8 Model is presented first in a Cartesian format and then in geometrical form. This model is an intuitive idea based on a particular reading of history rather than a new international relations theory...

  3. Existing Model Metrics and Relations to Model Quality

    OpenAIRE

    Mohagheghi, Parastoo; Dehlen, Vegard

    2009-01-01

    This paper presents quality goals for models and provides a state-of-the-art analysis regarding model metrics. While model-based software development often requires assessing the quality of models at different abstraction and precision levels and developed for multiple purposes, existing work on model metrics does not reflect this need. Model size metrics are descriptive and may be used for comparing models, but their relation to model quality is not well-defined. Code metrics are proposed to be ...

  4. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  5. Spacecraft Formation Control and Estimation Via Improved Relative Motion Dynamics

    Science.gov (United States)

    2017-03-30

    [Garbled extraction from the report body; recoverable fragments: the relative state is represented as x(t) = [r(t); v(t)] (Eq. 2); the instantaneous line-of-sight (LOS) from observer to RSO is the unit vector along the relative position; Eq. (38) involves K̂, the unit vector in the direction of Earth's polar axis, with a note on selecting between the two possible parallel directions. Bibliography fragments cite the Space-Based Visible Program (Lincoln Laboratory Journal, Vol. 11, No. 2, pp. 205-238, 1998) and Fujimoto, K. and Scheeres, D. J., "Short-Arc...".]

  6. On GPS Water Vapour estimation and related errors

    Science.gov (United States)

    Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha

    2010-05-01

    Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the earth's radiation budget in the absorption processes of both the incoming shortwave and the outgoing longwave radiation; it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. An accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as operational weather forecasting. Since the development of the satellite positioning systems, it has been clear that the troposphere and its WV content were a source of delay in the positioning signal, in other words a source of error in the positioning process or, in turn, a source of information in meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected from a ground-fixed dual frequency GPS geodetic station. This technique for processing the GPS data is based on measuring the signal travel time in the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other, non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path. The GPS signal has the advantage of being nearly
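
    A minimal Python sketch of the final step of the processing chain described above: converting a zenith total delay (ZTD) to IWV via the Saastamoinen hydrostatic delay and the standard refractivity-based conversion factor. The constants are the commonly quoted textbook values and should be treated as assumptions here, not as this paper's numbers.

        import numpy as np

        def iwv_from_ztd(ztd_m, pressure_hpa, lat_rad, height_km, tm_kelvin):
            # Hydrostatic (dry) delay from surface pressure, Saastamoinen model.
            zhd = 0.0022768 * pressure_hpa / (
                1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * height_km)
            zwd = ztd_m - zhd  # zenith wet delay (m)
            k2p, k3, rv = 0.221, 3739.0, 461.5  # K/Pa, K^2/Pa, J/(kg K)
            return zwd / (1e-6 * (k2p + k3 / tm_kelvin) * rv)  # IWV in kg/m^2

        # ZTD of 2.40 m at sea-level pressure, mid-latitude, Tm = 273 K.
        print(iwv_from_ztd(2.40, 1013.0, np.deg2rad(43.8), 0.05, 273.0))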

  7. Model Year 2014 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  8. Model Year 2010 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-10-14

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  9. Model Year 2016 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  10. Model Year 2015 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2014-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  11. Model Year 2005 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2004-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  12. Model Year 2006 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2005-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  13. Modeling extreme events: Sample fraction adaptive choice in parameter estimation

    Science.gov (United States)

    Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata

    2012-09-01

    When modeling extreme events there are a few primordial parameters, among which are the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for the choice of k. Choosing some well-known estimators of those parameters we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
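
    The k-dependence described above is easy to see with the Hill estimator of the extreme value index; the Python sketch below computes it for every usable k on a synthetic heavy-tailed sample (this is the standard estimator only, not the paper's adaptive heuristic for choosing k).

        import numpy as np

        def hill_estimates(x):
            # gamma_hat(k) = mean(log X_(1..k)) - log X_(k+1),
            # with order statistics taken in descending order.
            logs = np.log(np.sort(x)[::-1])
            k = np.arange(1, len(x))
            return np.cumsum(logs[:-1]) / k - logs[1:]

        rng = np.random.default_rng(3)
        x = np.abs(rng.standard_cauchy(2000))  # true extreme value index = 1
        gammas = hill_estimates(x)
        print(gammas[49], gammas[999])  # high variance at small k, bias at large k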

  14. Model Year 2009 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2008-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  15. Model Year 2008 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  16. Model Year 2007 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  17. Modeling of Closed-Die Forging for Estimating Forging Load

    Science.gov (United States)

    Sheth, Debashish; Das, Santanu; Chatterjee, Avik; Bhattacharya, Anirban

    2017-02-01

    Closed-die forging is a common metal forming process used for making a range of products. Sufficient load must be exerted on the billet to deform the material. This forging load depends on the work material properties and on the frictional characteristics between the work material and the punch and die. Several researchers have worked on estimating the forging load for specific products under different process variables, using experimental data on deformation resistance and friction to calculate the load. In this work, a theoretical estimate of the forging load is compared with the value obtained from an LS-DYNA finite element model. The theoretical work uses the slab method to assess the forging load for an axisymmetric upsetting job made of lead. The theoretical estimate is slightly higher than the experimental value, whereas the simulation matches the experimental forging load quite closely, indicating that this simulation software can be widely used for such problems.
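
    A minimal sketch of the slab-method load estimate in its classical textbook form, which may differ in detail from the paper's derivation; the flow stress, friction coefficient and billet geometry below are illustrative values, not the paper's lead-specimen data.

      import math

      def upsetting_force(flow_stress, mu, radius, height):
          # Slab-method estimate for upsetting a cylindrical billet with
          # Coulomb friction: average die pressure ~ Y * (1 + 2*mu*R / (3*h))
          p_avg = flow_stress * (1.0 + 2.0 * mu * radius / (3.0 * height))
          return p_avg * math.pi * radius ** 2

      # illustrative values only (not the paper's lead specimen data)
      force = upsetting_force(flow_stress=20e6, mu=0.15, radius=0.02, height=0.03)
      print(f"estimated forging load: {force / 1e3:.1f} kN")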

  18. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  19. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1991-01-01

    The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. (Author)
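
    A sketch of the kind of calculation the abstract describes, assuming a continuous-time Markov chain with states (both contingencies intact, only A failed, only B failed, both failed) and solving the standard first-step equations for the mean time to absorption; all rates are illustrative, not from the paper.

      import numpy as np

      # Illustrative failure rates (per year) and recovery rates for two
      # independent contingencies; not values from the paper.
      lam_a, lam_b = 0.5, 0.3     # mean failure times: 2.0 and 3.3 years
      mu_a, mu_b = 50.0, 50.0     # mean recovery times: about a week

      # Transient states: 0 = both controls intact, 1 = A failed, 2 = B failed.
      # The state "both failed simultaneously" is absorbing and is left out.
      Q = np.array([
          [-(lam_a + lam_b), lam_a,           lam_b          ],
          [mu_a,             -(mu_a + lam_b), 0.0            ],
          [mu_b,             0.0,             -(mu_b + lam_a)],
      ])

      # Mean times to absorption t solve Q t = -1 (first-step analysis).
      t = np.linalg.solve(Q, -np.ones(3))
      print(f"mean time to simultaneous collapse: {t[0]:.0f} years")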

  20. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through the use of model averaging and random effects modeling, which provides slightly conservative, yet useful, estimates of the benchmark dose lower limit under realistic scenarios.

  1. A screening-level modeling approach to estimate nitrogen ...

    Science.gov (United States)

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk of surface waters exceeding numerical nutrient standards, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate the sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods, depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with the export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and the risk of standard exceedance were evaluated.
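
    A hedged sketch of the export-coefficient loading calculation with probabilistic exceedance risk described above; the land-use areas, coefficient distributions and target load are hypothetical, and the actual WQM-TMDL-N implementation may differ.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical catchment: land-use areas (ha) and mean export
      # coefficients (kg N/ha/yr); all values illustrative.
      areas = {"cropland": 1200.0, "pasture": 400.0, "urban": 150.0}
      coef_mean = {"cropland": 18.0, "pasture": 8.0, "urban": 10.0}
      coef_cv = 0.4              # coefficient of variation of each coefficient
      target_load = 28000.0      # hypothetical target load (kg N/yr)

      n = 10000
      loads = np.zeros(n)
      for use, area in areas.items():
          # lognormal export coefficient matched to the given mean and CV
          sigma = np.sqrt(np.log(1.0 + coef_cv ** 2))
          mu = np.log(coef_mean[use]) - 0.5 * sigma ** 2
          loads += rng.lognormal(mu, sigma, size=n) * area

      print(f"mean TN load: {loads.mean():.0f} kg/yr")
      print(f"probability of exceeding the target: {(loads > target_load).mean():.2f}")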

  2. Forward models and state estimation in compensatory eye movements

    Directory of Open Access Journals (Sweden)

    Maarten A Frens

    2009-11-01

    The compensatory eye movement system maintains a stable retinal image, integrating information from different sensory modalities to compensate for head movements. Inspired by recent models of the physiology of limb movements, we suggest that compensatory eye movements (CEM) can be modeled as a control system with three essential building blocks: a forward model that predicts the effects of motor commands; a state estimator that integrates sensory feedback into this prediction; and a feedback controller that translates a state estimate into motor commands. We propose a specific mapping of nuclei within the CEM system onto these control functions. Specifically, we suggest that the Flocculus is responsible for generating the forward model prediction and that the Vestibular Nuclei integrate sensory feedback to generate an estimate of the current state. Finally, the brainstem motor nuclei – in the case of horizontal compensation this means the Abducens Nucleus and the Nucleus Prepositus Hypoglossi – implement a feedback controller, translating state into motor commands. While these efforts to understand the physiological control system as a feedback control system are in their infancy, there is the intriguing possibility that compensatory eye movements and targeted voluntary movements use the same cerebellar circuitry in fundamentally different ways.

  3. The toxicology of heroin-related death: estimating survival times.

    Science.gov (United States)

    Darke, Shane; Duflou, Johan

    2016-09-01

    The feasibility of intervention in heroin overdose is of clinical importance. The presence of 6-monoacetyl morphine (6MAM) in the blood suggests a survival time of less than 20-30 minutes following heroin administration. The study aimed to determine the proportion of cases in which 6MAM was present, and to compare concentrations of secondary metabolites and circumstances of death by 6MAM status. Cases of heroin-related death presenting to the Department of Forensic Medicine, Sydney, Australia, from 1 January 2013 to 12 December 2014 were analysed; a total of 145 cases. The mean age was 40.5 years and 81% were male. Measurements were concentrations of 6MAM, free morphine, morphine-3-glucuronide (M3G) and morphine-6-glucuronide (M6G). Circumstances of death included bronchopneumonia, apparent sudden collapse, location and other central nervous system (CNS) depressants. 6MAM was detected in 43% [confidence interval (CI) = 35-51%] of cases. The median free morphine concentration of 6MAM-positive cases was more than twice that of cases without 6MAM (0.26 versus 0.12 mg/l). 6MAM-positive cases also had lower concentrations of the other major heroin metabolites: M3G (0.05 versus 0.29 mg/l) and M6G (0.02 versus 0.05 mg/l), with correspondingly lower M3G/morphine (0.54 versus 2.71) and M6G/morphine (0.05 versus 0.50) ratios. Significant independent correlates of 6MAM were a higher free morphine concentration [odds ratio (OR) = 1.7], a lower M6G/free morphine ratio (OR = 0.5) and signs of apparent collapse (OR = 6.7). In heroin-related deaths in Sydney, Australia during 2013 and 2014, 6-monoacetyl morphine was present in the blood in less than half of cases, suggesting that a minority of cases had survival times after overdose of less than 20-30 minutes. The toxicology of heroin metabolites and the circumstances of death were consistent with 6-monoacetyl morphine as a proxy for a more rapid death. © 2016 Society for the Study of Addiction.

  4. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  5. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  6. Model for Estimation of Fuel Consumption of Cruise Ships

    Directory of Open Access Journals (Sweden)

    Morten Simonsen

    2018-04-01

    This article presents a model to estimate the energy use and fuel consumption of cruise ships that sail Norwegian waters. Automatic identification system (AIS) data and technical information about cruise ships provided input to the model, including service speed, total power, and number of engines. The model was tested against real-world data obtained from a small cruise vessel and from both a medium and a large cruise ship. It is sensitive to speed and the corresponding engine load profile of the ship. A crucial determinant of total fuel consumption is also associated with hotel functions, which can make a large contribution to the overall energy use of cruise ships. Real-world data fit the model best when ship speed is 70-75% of service speed. With decreased or increased speed, the model tends to diverge from real-world observations. The model gives a proxy for calculating the fuel consumption of cruise ships that sail Norwegian waters and can be used to estimate greenhouse gas emissions and to evaluate energy reduction strategies for cruise ships.
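
    A rough sketch of a fuel-consumption proxy of the kind described, assuming the cubic propeller law for propulsion power plus a constant hotel load; the specific fuel oil consumption, hotel load and load fraction at service speed are assumed values, not the article's calibration.

      def cruise_fuel_kg(speed_kn, service_speed_kn, total_power_kw, hours,
                         hotel_kw=4000.0, sfoc_g_per_kwh=200.0):
          # Propulsion power scaled from service speed by the cubic propeller
          # law, assuming ~80% of installed power is used at service speed.
          propulsion_kw = 0.8 * total_power_kw * (speed_kn / service_speed_kn) ** 3
          energy_kwh = (propulsion_kw + hotel_kw) * hours
          return energy_kwh * sfoc_g_per_kwh / 1000.0   # grams -> kg

      # e.g. a ship with 40 MW installed sailing 10 h at 70% of its 20 kn
      # service speed (all numbers illustrative)
      print(f"{cruise_fuel_kg(14.0, 20.0, 40000.0, 10.0) / 1000.0:.1f} t fuel")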

  7. Instrumental variables estimation under a structural Cox model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

    2017-01-01

    Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposure and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data.

  8. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  9. Modelling, Estimation and Control of Networked Complex Systems

    CERN Document Server

    Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

    2009-01-01

    The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems, one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economic events, networks of critical infrastructures, resource allocation, information processing, or control over communication networks. Moreover, it is shown how recent technological advances in wireless communication and the decreasing cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...

  10. Estimation Model for Concrete Slump Recovery by Using Superplasticizer

    OpenAIRE

    Chaiyakrit Raoupatham; Ram Hari Dhakal; Chalermchai Wanichlamlert

    2015-01-01

    This paper introduces a practical solution for concrete slump recovery using a type-F chemical admixture (naphthalene-based superplasticizer) to address the problem of concrete becoming unusable through slump loss, a particular concern in tropical countries where slump is lost faster. On the other hand, adding superplasticizer to concrete indiscriminately can cause segregation. Therefore, this paper also develops an estimation model used to calcula...

  11. MATHEMATICAL MODEL FOR ESTIMATION OF MECHANICAL SYSTEM CONDITION IN DYNAMICS

    Directory of Open Access Journals (Sweden)

    D. N. Mironov

    2011-01-01

    The paper considers the estimation of the condition of a complicated mechanical system in dynamics, with due account of material degradation and the accumulation of micro-damage. An element of a continuous medium has been simulated and described with the help of a discrete element. The paper contains a description of a model for determining mechanical system longevity in accordance with the number of cycles and the operational period.

  12. Estimation of Continuous Time Models in Economics: an Overview

    OpenAIRE

    Clifford R. Wymer

    2009-01-01

    The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...

  13. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...

  14. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation, and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides efficiently estimating the parameter values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.

  15. A High Effective Fuzzy Synthetic Evaluation Multi-model Estimation

    Directory of Open Access Journals (Sweden)

    Yang LIU

    2014-01-01

    The algorithm flow of the variable structure multi-model method (VSMM) is complex and its tracking performance inefficient, which makes VSMM difficult to apply in fielded equipment. This paper presents a high-performance variable structure multi-model method based on multi-factor fuzzy synthetic evaluation (HEFS_VSMM). Under the guidance of the variable structure method, HEFS_VSMM first uses multi-factor fuzzy synthetic evaluation in the model-set adaptation strategy to select the appropriate model set in real time and reduce the computational complexity of model evaluation. Second, it selects the model set center according to the evaluation results of each model and sets the property value for the current model set. Third, it chooses different processes based on the current model set property value to simplify the logical complexity of the algorithm. Finally, the algorithm obtains the total estimate by optimal information fusion over the above processing results. Simulation results show that, compared with FSMM and EMA, the mean estimation error for position, velocity and acceleration in HEFS_VSMM is improved from -0.029 m, -0.350 m/s and -10.051 m/s2 to -0.023 m, 0.052 m/s and -5.531 m/s2, while the algorithm cycle is reduced from 0.0051 s to 0.0025 s.

  16. Estimating radiation and temperature data for crop simulation model

    International Nuclear Information System (INIS)

    Ferrer, A.B.; Centeno, H.G.S.; Sheehy, J.E.

    1996-01-01

    Weather (radiation and temperature) and crop characteristics determine the potential production of an irrigated rice crop. Daily weather data are important inputs to ORYZA 1, an eco-physiological crop model. However, missing values often occur, and sometimes daily weather data are not readily available. More than 20 years of historic daily weather data had been collected from six stations in the Philippines -- Albay, Butuan, Munoz, Batac, Aborlan, and Los Banos. Daily weather values were estimated by deriving long-term monthly means and then (1) using the same value throughout each month, (2) linearly interpolating between months, or (3) using the SIMMETEO weather generator. A validated ORYZA 1 was run using actual daily weather data. The model was run again using weather data obtained from each estimation procedure, and the predicted yields from the different simulation runs were compared. The yields predicted using the different weather data sets for each site differed by as much as 20 percent. Among the three estimation procedures, the interpolated monthly mean values gave results comparable with those of model runs using actual weather data.
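
    A minimal sketch of estimation method (2), linear interpolation between long-term monthly means anchored at mid-month days of year; the radiation values below are illustrative rather than the Philippine station data.

      import numpy as np

      # Long-term monthly means of daily solar radiation (MJ/m2/day) and
      # mid-month days of year; the values are illustrative.
      monthly_mean = np.array([14.2, 15.8, 17.5, 18.9, 17.1, 15.0,
                               14.4, 14.9, 15.6, 16.2, 15.1, 14.0])
      mid_doy = np.array([15, 45, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])

      def daily_from_monthly(doy):
          # wrap the series so interpolation works across the year boundary
          x = np.concatenate(([mid_doy[-1] - 365], mid_doy, [mid_doy[0] + 365]))
          y = np.concatenate(([monthly_mean[-1]], monthly_mean, [monthly_mean[0]]))
          return np.interp(doy, x, y)

      print(daily_from_monthly(1), daily_from_monthly(60), daily_from_monthly(365))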

  17. Estimating the Multilevel Rasch Model: With the lme4 Package

    Directory of Open Access Journals (Sweden)

    Harold Doran

    2007-02-01

    Traditional Rasch estimation of item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) how to estimate a teacher × content strand interaction.

  18. [Foundation of preoperative prognosis estimation model for glioblastoma multiforme].

    Science.gov (United States)

    Jiang, H H; Feng, G Y; Liu, D; Ren, X H; Cui, Y; Lin, S

    2017-08-15

    Objective: This study explored preoperative prognostic factors in patients with glioblastoma multiforme (GBM) in order to propose a preoperative prognosis estimation model. Methods: The clinical data of 416 patients diagnosed with GBM in Beijing Tiantan Hospital affiliated to Capital Medical University from 2008 to 2015 were retrospectively reviewed. A total of nine factors were enrolled in the survival analysis: gender, age, duration of symptoms, preoperative epilepsy, preoperative muscle weakness, preoperative headache, preoperative KPS score, tumor location and tumor diameter. The significant factors identified by Kaplan-Meier plots were further entered into a multivariate Cox regression analysis. On the basis of the multivariate analysis results, a preoperative prognosis estimation model was founded. Results: Univariate analysis showed that age ≥50 years, absence of preoperative epilepsy, tumor located in non-frontotemporal lobe, tumor diameter ≥6 cm and lower preoperative KPS score were associated with survival; of these, age ≥50 years, absence of preoperative epilepsy and tumor located in non-frontotemporal lobe were independent risk factors (P < 0.05). The prognostic estimation model based on the independent risk factors divided the whole cohort into three subgroups with different survival (P < 0.001). Conclusions: The more risk factors present, the higher the score and the poorer the prognosis. Patients in the high-risk group had a lower gross total resection rate but a higher rate of postoperative complications, suggesting that aggressive resection is not suitable for high-risk patients.

  19. In-phase and quadrature imbalance modeling, estimation, and compensation

    CERN Document Server

    Li, Yabo

    2013-01-01

    This book provides a unified IQ imbalance model and systematically reviews the existing estimation and compensation schemes. It covers the different assumptions and approaches that lead to many models of IQ imbalance. In wireless communication systems, the In-phase and Quadrature (IQ) modulator and demodulator are usually used as transmitter (TX) and receiver (RX), respectively. For Digital-to-Analog Converter (DAC) and Analog-to-Digital Converter (ADC) limited systems, such as multi-giga-hertz bandwidth millimeter-wave systems, using analog modulator and demodulator is still a low power and l

  20. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Science.gov (United States)

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  1. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower-variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. They establish that artificial neural networks are powerful tools for estimating parameters of simple compartmental models. (author)
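
    A toy sketch of the approach for a biexponential model: a feed-forward network is trained on synthetic noisy curves to output the parameter values directly. The network size, parameter ranges and noise level are arbitrary choices, and the paper's own architecture may differ.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.linspace(0.1, 10.0, 30)     # sampling times

      def biexp(p):
          a1, k1, a2, k2 = p
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

      # training set: random parameter draws -> noisy synthetic curves;
      # k1 and k2 ranges are kept disjoint to avoid label switching
      n = 5000
      params = rng.uniform([0.5, 0.5, 0.5, 0.01], [2.0, 2.0, 2.0, 0.3], size=(n, 4))
      curves = np.array([biexp(p) for p in params])
      curves += rng.normal(0.0, 0.02, size=curves.shape)   # measurement noise

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=400, random_state=0)
      net.fit(curves, params)            # curve in -> parameter estimates out

      true = np.array([1.2, 1.5, 0.8, 0.1])
      noisy = biexp(true) + rng.normal(0.0, 0.02, size=t.size)
      print("estimated parameters:", net.predict(noisy.reshape(1, -1)).round(3))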

  2. Approximate relative fatigue life estimation methods for thin-walled monolithic ceramic crowns.

    Science.gov (United States)

    Nasrin, Sadia; Katsube, Noriko; Seghi, Robert R; Rokhlin, Stanislav I

    2018-02-02

    The objective is to establish an approximate relative fatigue life estimation method under simulated mastication load for thin-walled monolithic restorations. Experimentally measured fatigue parameters of fluormica, leucite, lithium disilicate (LD) and yttrium-stabilized zirconia in the existing literature were expressed in terms of the maximum cyclic stress and the stress corresponding to the initial crack size prior to N loading cycles, in order to assess their differences. Assuming that failures mostly originate from the high-stress region, an approximate restoration life method was explored by ignoring the multi-axial nature of the stress state. Experiments utilizing a simple trilayer restoration model with ceramic LD were performed to test the model validity. Ceramic fatigue was found to be similar for the clinically relevant loading range and mastication frequency, resulting in the development of an approximate fatigue equation that is universally applicable to a wide range of dental ceramic materials. The equation was incorporated into the approximate restoration life estimation, leading to a simple expression in terms of fast fracture parameters, the high-stress area ΔA, the high stress averaged over ΔA, and N. The developed method was preliminarily verified by the experiments. The impact of fast fracture parameters on the restoration life was separated from other factors, and the importance of surface preparation was manifested in the simplified equation. Both the maximum stress and the area of the high-stress region were also shown to play critical roles. While nothing can replace actual clinical studies, this method can provide a reasonable preliminary estimate of relative restoration life. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  3. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise,

  4. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    Science.gov (United States)

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  5. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  6. Stochastic linear hybrid systems: Modeling, estimation, and application

    Science.gov (United States)

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent of its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  7. Lagrangian speckle model and tissue-motion estimation--theory.

    Science.gov (United States)

    Maurice, R L; Bertrand, M

    1999-07-01

    It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., the resulting changes in speckle patterns act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other the alteration of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. It leads to an analytical description of the decorrelation that enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for a linear geometrical transformation (LT) of the scattering function, i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested using simulated radio-frequency (RF) images of tissue undergoing axial shear.

  8. Estimating regional methane surface fluxes: the relative importance of surface and GOSAT mole fraction measurements

    Directory of Open Access Journals (Sweden)

    A. Fraser

    2013-06-01

    We use an ensemble Kalman filter (EnKF), together with the GEOS-Chem chemistry transport model, to estimate regional monthly methane (CH4) fluxes for the period June 2009–December 2010 using proxy dry-air column-averaged mole fractions of methane (XCH4) from GOSAT (Greenhouse gases Observing SATellite) and/or CH4 surface mole fraction measurements from NOAA ESRL (Earth System Research Laboratory) and CSIRO GASLAB (Global Atmospheric Sampling Laboratory). Global posterior estimates using GOSAT and/or surface measurements are between 510–516 Tg yr−1, which is less than, though within the uncertainty of, the prior global flux of 529 ± 25 Tg yr−1. We find larger differences between regional prior and posterior fluxes, with the largest changes in monthly emissions (75 Tg yr−1) occurring in Temperate Eurasia. In non-boreal regions the error reductions for inversions using the GOSAT data are at least three times larger (up to 45%) than if only surface data are assimilated, a reflection of the greater spatial coverage of GOSAT, with two exceptions: latitudes >60°, associated with a data filter, and Europe, where the surface network adequately describes fluxes on our model spatial and temporal grid. We use CarbonTracker and GEOS-Chem XCO2 model output to investigate model error in quantifying proxy GOSAT XCH4 (which involves model XCO2) and in inferring methane flux estimates from surface mole fraction data, and show similar resulting fluxes, with differences reflecting initial differences in the proxy value. Using a series of observing system simulation experiments (OSSEs) we characterize the posterior flux error introduced by non-uniform atmospheric sampling by GOSAT. We show that clear-sky measurements can theoretically reproduce fluxes within 10% of true values, with the exception of tropical regions where, due to a large seasonal cycle in the number of measurements because of clouds and aerosols, fluxes are within 15% of true fluxes. We evaluate our

  9. Structure Refinement for Vulnerability Estimation Models using Genetic Algorithm Based Model Generators

    Directory of Open Access Journals (Sweden)

    2009-01-01

    In this paper, a method for model structure refinement is proposed and applied to the estimation of the cumulative number of vulnerabilities over time. Security as a quality characteristic is presented and defined. Vulnerabilities are defined and their importance is assessed. Existing models used for estimating the number of vulnerabilities are enumerated and their structure inspected. The principles of genetic model generators are examined. Model structure refinement is defined in comparison with model refinement, and a method for model structure refinement is proposed. A case study shows how the method is applied and presents the obtained results.

  10. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    Science.gov (United States)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from
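
    A condensed sketch of the multi-model idea, assuming three hypothetical conceptual models whose Monte Carlo samples are pooled according to prior model weights; all distributions, weights and units are illustrative, not site data.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 20000

      # Three hypothetical conceptual models of the same source zone, each
      # giving mass discharge (g/day) as concentration (g/m3) * Darcy flux
      # (m/day) * source area (m2); all distributions are illustrative.
      samples = {
          "A": rng.lognormal(np.log(50.0), 0.5, n) * rng.uniform(0.02, 0.05, n) * 20.0,
          "B": rng.lognormal(np.log(15.0), 0.7, n) * rng.uniform(0.02, 0.05, n) * 80.0,
          "C": rng.lognormal(np.log(30.0), 1.0, n) * rng.uniform(0.01, 0.08, n) * 40.0,
      }
      weights = {"A": 0.5, "B": 0.3, "C": 0.2}   # prior belief in each model

      # pool the per-model Monte Carlo samples according to the model weights
      pooled = np.concatenate([samples[m][: int(weights[m] * n)] for m in weights])
      lo, med, hi = np.percentile(pooled, [5, 50, 95])
      print(f"median {med:.0f} g/day, 90% interval [{lo:.0f}, {hi:.0f}]")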

  11. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface. Ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values were 1.86, 2.02 and 1.52 cm, respectively. Comparing these predictions with the actual instrumentation data specified the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  12. Evapotranspiration Estimates for a Stochastic Soil-Moisture Model

    Science.gov (United States)

    Chaleeraktrakoon, Chavalit; Somsakun, Somrit

    2009-03-01

    Potential evapotranspiration is necessary information for applying a widely used stochastic model of soil moisture (I. Rodriguez Iturbe, A. Porporato, L. Ridolfi, V. Isham and D. R. Cox, Probabilistic modelling of water balance at a point: The role of climate, soil and vegetation, Proc. Roy. Soc. London A455 (1999) 3789-3805). An objective of the present paper is thus to find a proper estimate of the evapotranspiration for the stochastic model. This estimate is obtained by comparing the observed soil-moisture distribution with the distributions calculated using various techniques, such as Thornthwaite, Makkink, Jensen-Haise, FAO Modified Penman, and Blaney-Criddle. The comparison, using five sequences of daily soil moisture for a dry season from November 2003 to April 2004 (Udornthani Province, Thailand), indicates that all methods can be used if the required weather information is available, because their soil-moisture distributions are alike. In addition, the model is shown to describe the phenomenon approximately at a weekly or biweekly time scale, which is desirable for agricultural engineering applications.
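
    As an example of the techniques compared above, the Thornthwaite method needs only monthly mean temperature; the sketch below implements its standard formulation (monthly PET corrected for day length and month length) with illustrative tropical inputs, not the paper's Udornthani data.

      import numpy as np

      def thornthwaite_pet(monthly_temp_c, month_days, daylight_hours):
          # monthly PET (mm) = 16 * (10*T/I)**a, corrected for day length
          # and month length; I is the annual heat index
          t = np.maximum(np.asarray(monthly_temp_c, dtype=float), 0.0)
          heat_index = np.sum((t / 5.0) ** 1.514)
          a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
               + 1.792e-2 * heat_index + 0.49239)
          pet = 16.0 * (10.0 * t / heat_index) ** a
          return (pet * np.asarray(daylight_hours) / 12.0
                  * np.asarray(month_days) / 30.0)

      # illustrative tropical inputs, not the paper's Udornthani data
      temps = [26, 27, 29, 30, 30, 29, 28, 28, 28, 27, 26, 25]
      days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
      sun = [11.5] * 12
      print(np.round(thornthwaite_pet(temps, days, sun)))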

  13. Rainfall estimation with TFR model using Ensemble Kalman filter

    Science.gov (United States)

    Asyiqotur Rohmah, Nabila; Apriliani, Erna

    2018-03-01

    Rainfall fluctuation can affect other environmental conditions and is correlated with economic activity and public health. The increase in global average temperature is influenced by the increasing CO2 concentration in the atmosphere, which causes climate change, while forests act as carbon sinks that help maintain the carbon cycle and mitigate climate change. Climate change, expressed as deviations in rainfall intensity, can affect the economy of a region or even of countries, which motivates research relating rainfall to forest area. In this study, the mathematical model used is one that describes global temperature, forest cover, and seasonal rainfall, called the TFR (temperature, forest cover, and rainfall) model. The model is first discretized and then estimated by the Ensemble Kalman Filter (EnKF) method. The results show that the more ensemble members are used in the estimation, the better the result. The accuracy of the simulation result is also influenced by which variables are measured.
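
    A minimal sketch of a single EnKF analysis step with perturbed observations, the update underlying the estimation described above; the three-variable state, observation operator and noise levels are illustrative, not the paper's TFR setup.

      import numpy as np

      def enkf_analysis(ensemble, H, y, obs_var, rng):
          # one EnKF analysis step with perturbed observations
          n_obs, n_ens = H.shape[0], ensemble.shape[1]
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          P = X @ X.T / (n_ens - 1)                     # sample forecast covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var * np.eye(n_obs))
          Y = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
          return ensemble + K @ (Y - H @ ensemble)

      rng = np.random.default_rng(0)
      # toy 3-variable state (temperature, forest cover, rainfall anomaly);
      # only the rainfall component is observed; all values illustrative
      truth = np.array([0.8, 0.4, 120.0])
      H = np.array([[0.0, 0.0, 1.0]])
      ens = truth[:, None] + rng.normal(0.0, [[0.2], [0.1], [25.0]], (3, 50))
      y = np.array([truth[2]]) + rng.normal(0.0, 5.0, 1)
      post = enkf_analysis(ens, H, y, obs_var=25.0, rng=rng)
      print("prior rainfall mean:", round(float(ens[2].mean()), 1),
            "posterior:", round(float(post[2].mean()), 1))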

  14. The complex model of risk and progression of AMD estimation

    Directory of Open Access Journals (Sweden)

    V. S. Akopyan

    2012-01-01

    Purpose: To develop a method and a statistical model to estimate the individual risk of AMD and the risk of progression to advanced AMD using clinical and genetic risk factors. Methods: A statistical risk assessment model was developed using stepwise binary logistic regression analysis. To estimate population differences in the prevalence of allelic variants of genes and to develop models adapted to the population of the Moscow region, genotyping and assessment of the influence of other risk factors were performed in two groups: patients with different stages of AMD (n = 74) and a control group (n = 116). Genetic risk factors included in the study: polymorphisms in the complement system genes (C3 and CFH), genes at the 10q26 locus (ARMS2 and HTRA1), and a polymorphism in the mitochondrial gene MT-ND2. Clinical risk factors included in the study: age, gender, high body mass index, and smoking history. Results: A comprehensive analysis of genetic and clinical risk factors for AMD in the study group was performed. A statistical model for assessment of individual AMD risk was compiled (sensitivity 66.7%, specificity 78.5%, AUC = 0.76), as was a statistical model describing the probability of late AMD (sensitivity 66.7%, specificity 78.3%, AUC = 0.73). The developed system makes it possible to determine the most likely form of late AMD: dry or wet. Conclusion: The developed test system and mathematical algorithm for determining the risk of AMD and of progression to advanced AMD have fair diagnostic informativeness and are promising for use in clinical practice.

  15. Estimating a DIF decomposition model using a random-weights linear logistic test model approach.

    Science.gov (United States)

    Paek, Insu; Fukuhara, Hirotaka

    2015-09-01

    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.

  16. Comparison of four different energy balance models for estimating evapotranspiration in the Midwestern United States

    Science.gov (United States)

    Singh, Ramesh K.; Senay, Gabriel B.

    2016-01-01

    The development of different energy balance models has allowed users to choose a model based on its suitability in a region. We compared four commonly used models—Mapping EvapoTranspiration at high Resolution with Internalized Calibration (METRIC) model, Surface Energy Balance Algorithm for Land (SEBAL) model, Surface Energy Balance System (SEBS) model, and the Operational Simplified Surface Energy Balance (SSEBop) model—using Landsat images to estimate evapotranspiration (ET) in the Midwestern United States. Our validation of the models using three AmeriFlux cropland sites at Mead, Nebraska, showed that all four models captured the spatial and temporal variation of ET reasonably well with an R2 of more than 0.81. Both the METRIC and SSEBop models showed a low root mean square error (0.80), whereas the SEBAL and SEBS models resulted in relatively higher bias for estimating daily ET. The empirical equation for daily average net radiation used in the SEBAL and SEBS models for upscaling instantaneous ET to daily ET resulted in underestimation of daily ET, particularly when the daily average net radiation was more than 100 W·m−2. Estimated daily ET for both cropland and grassland showed some degree of linearity among METRIC, SEBAL, and SEBS, but the linearity was stronger for evaporative fraction. Thus, these ET models have strengths and limitations for applications in water resource management.

  17. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of the predicted means was compared to the means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate appears more affected by variability in estimating abundance than by variability in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.

  18. Computer model for estimating electric utility environmental noise

    International Nuclear Information System (INIS)

    Teplitzky, A.M.; Hahn, K.J.

    1991-01-01

    This paper reports on an algorithm-based computer code for estimating environmental noise emissions from the operation and the construction of electric power plants. The computer code (Model) is used to predict octave band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and calculates off-site sound levels for each noise source and for the entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave band levels at user-defined receptor locations. Calculated sound levels can be compared with user-designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies.
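
    The octave-band-to-overall-level step that such a code must perform can be illustrated as follows; the band levels are hypothetical, and the standard IEC A-weighting corrections are applied before energy summation:

        import numpy as np

        # Octave-band centre frequencies (Hz) and IEC A-weighting corrections (dB)
        bands = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
        a_weight = [-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]

        def overall_dba(band_levels):
            """Energy-sum octave-band levels into one A-weighted level."""
            weighted = np.array(band_levels) + np.array(a_weight)
            return 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))

        print(round(overall_dba([75, 74, 72, 70, 68, 65, 60, 55]), 1))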

  19. Dynamic plant uptake modelling and mass flux estimation

    DEFF Research Database (Denmark)

    Rein, Arno; Bauer-Gottwein, Peter; Trapp, Stefan

    2011-01-01

    Plants significantly influence contaminant transport and fate. Important processes are uptake of soil and groundwater contaminants, as well as biodegradation in plants and their root zones. Models for the prediction of chemical uptake into plants are required for the set-up of mass balances in environmental systems at different scales. Feedback mechanisms between plants and hydrological systems can play an important role. However, they have received little attention to date. Here, a new model concept for dynamic plant uptake models applying analytical matrix solutions is presented, which can be coupled to groundwater transport simulation tools. Exemplary simulations of plant uptake were carried out in order to estimate chemical concentrations in the soil–plant–air system and the influence of plants on contaminant mass fluxes from soil to groundwater.
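
    For a linear compartment system dc/dt = A c, the analytical matrix solution mentioned above is c(t) = exp(A t) c(0). A minimal sketch with an illustrative two-compartment soil-plant model (the rate constants are assumptions, not the paper's values):

        import numpy as np
        from scipy.linalg import expm

        # First-order rates (1/d): soil loss, soil-to-plant uptake, plant loss
        k_soil_loss, k_uptake, k_plant_loss = 0.05, 0.02, 0.10
        A = np.array([[-(k_soil_loss + k_uptake), 0.0],
                      [k_uptake,                 -k_plant_loss]])

        c0 = np.array([1.0, 0.0])        # initial concentrations (soil, plant)
        for t in (0.0, 10.0, 30.0, 60.0):
            print(t, expm(A * t) @ c0)   # analytical matrix solution at time t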

  20. Use of econometric models to estimate expenditure shares.

    Science.gov (United States)

    Trogdon, Justin G; Finkelstein, Eric A; Hoerger, Thomas J

    2008-08-01

    Objective: To investigate the use of regression models to calculate disease-specific shares of medical expenditures. Data source: Medical Expenditure Panel Survey (MEPS), 2000-2003. Study design: Theoretical investigation and secondary data analysis; condition files were used to define the presence of 10 medical conditions. Principal findings: Incremental effects of conditions on expenditures, expressed as a fraction of total expenditures, cannot generally be interpreted as shares. When the presence of one condition increases treatment costs for another condition, summing condition-specific shares leads to double-counting of expenditures. Conclusions: Condition-specific shares generated from multiplicative models should not be summed. We provide an algorithm that allows estimates based on these models to be interpreted as shares and summed across conditions.

  1. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  2. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
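
    A toy version of this workflow, assuming a flat prior on g, a known Gaussian measurement error, and neglecting air friction, can be written as a grid evaluation of the posterior:

        import numpy as np

        # Synthetic frame-by-frame data: fall distance d = g t^2 / 2 plus noise
        rng = np.random.default_rng(0)
        t = np.linspace(0.1, 0.6, 15)
        d_obs = 0.5 * 9.81 * t**2 + rng.normal(0.0, 0.005, t.size)

        g_grid = np.linspace(8.0, 12.0, 401)   # flat prior over this range
        sigma = 0.005                          # assumed measurement std (m)
        log_post = np.array([-0.5 * np.sum((d_obs - 0.5 * g * t**2)**2) / sigma**2
                             for g in g_grid])
        post = np.exp(log_post - log_post.max())
        post /= np.trapz(post, g_grid)         # normalised posterior density

        g_mean = np.trapz(g_grid * post, g_grid)
        print(f"posterior mean g = {g_mean:.2f} m/s^2")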

  3. A guide for estimating dynamic panel models: the macroeconomics models specifiness

    International Nuclear Information System (INIS)

    Coletta, Gaetano

    2005-10-01

    The aim of this paper is to review estimators for dynamic panel data models, a topic in which interest has grown recently. As a consequence of this recent interest, different estimation techniques have been proposed in the last few years and, given the recent development of the subject, there is still a lack of a comprehensive guide for panel data applications, and for macroeconomic panel data models in particular. Finally, we also provide some indications about the Stata software commands to estimate dynamic panel data models with the techniques illustrated in the paper.

  4. Calculation of prevalence estimates through differential equations: application to stroke-related disability.

    Science.gov (United States)

    Mar, Javier; Sainz-Ezkerra, María; Moler-Cuiral, Jose Antonio

    2008-01-01

    Neurological diseases now make up 6.3% of the global burden of disease mainly because they cause disability. To assess disability, prevalence estimates are needed. The objective of this study is to apply a method based on differential equations to calculate the prevalence of stroke-related disability. On the basis of a flow diagram, a set of differential equations for each age group was constructed. The linear system was solved analytically and numerically. The parameters of the system were obtained from the literature. The model was validated and calibrated by comparison with previous results. The stroke prevalence rate per 100,000 men was 828, and the rate for stroke-related disability was 331. The rates steadily rose with age, but the group between the ages of 65 and 74 years had the highest total number of individuals. Differential equations are useful to represent the natural history of neurological diseases and to make possible the calculation of the prevalence for the various states of disability. In our experience, when compared with the results obtained by Markov models, the benefit of the continuous use of time outweighs the mathematical requirements of our model. (c) 2008 S. Karger AG, Basel.

  5. A data assimilating model for estimating Southern Ocean biogeochemistry

    Science.gov (United States)

    Verdy, A.; Mazloff, M. R.

    2017-09-01

    A Biogeochemical Southern Ocean State Estimate (B-SOSE) is introduced that includes carbon and oxygen fields as well as nutrient cycles. The state estimate is constrained with observations while maintaining closed budgets and obeying dynamical and thermodynamic balances. Observations from profiling floats, shipboard data, underway measurements, and satellites are used for assimilation. The years 2008-2012 are chosen due to the relative abundance of oxygen observations from Argo floats during this time. The skill of the state estimate at fitting the data is assessed. The agreement is best for fields that are constrained with the most observations, such as surface pCO2 in Drake Passage (44% of the variance captured) and oxygen profiles (over 60% of the variance captured at 200 and 1000 m). The validity of adjoint method optimization for coupled physical-biogeochemical state estimation is demonstrated with a series of gradient check experiments. The method is shown to be mature and ready to synthesize in situ biogeochemical observations as they become more available. Documenting the B-SOSE configuration and diagnosing the strengths and weaknesses of the solution informs usage of this product as both a climate baseline and as a way to test hypotheses. Transport of Intermediate Waters across 32°S supplies significant amounts of nitrate to the Atlantic Ocean (5.57 ± 2.94 Tmol yr-1) and Indian Ocean (5.09 ± 3.06 Tmol yr-1), but much less nitrate reaches the Pacific Ocean (1.78 ± 1.91 Tmol yr-1). Estimates of air-sea carbon dioxide fluxes south of 50°S suggest a mean uptake of 0.18 Pg C/yr for the time period analyzed.

  6. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    Directory of Open Access Journals (Sweden)

    Teresa eLehnert

    2015-06-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e. least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment.
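
    A minimal sketch of the simulated-annealing step, using SciPy's dual_annealing to recover a single illustrative transition rate from synthetic survival data (not the authors' model or data):

        import numpy as np
        from scipy.optimize import dual_annealing

        # Toy state-based model: exponential killing of fungal cells at rate k;
        # recover k from noisy synthetic "experimental" survival data
        t = np.linspace(0, 6, 13)
        rng = np.random.default_rng(2)
        data = np.exp(-0.8 * t) + rng.normal(0, 0.02, t.size)

        def sse(params):
            (k,) = params
            return np.sum((np.exp(-k * t) - data) ** 2)

        result = dual_annealing(sse, bounds=[(0.0, 5.0)], seed=3)
        print(result.x)   # estimated transition rate, close to 0.8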

  7. Modelling bulk canopy resistance from climatic variables for evapotranspiration estimation

    Science.gov (United States)

    Perez, P. J.; Martinez-Cob, A.; Lecina, S.; Castellvi, F.; Villalobos, F. J.

    2003-04-01

    Evapotranspiration is a component of the hydrological cycle whose accurate computation is needed for an adequate management of water resources. In particular, a high level of accuracy in crop evapotranspiration estimation can represent an important saving of economic and water resources in the planning and management of irrigated areas. In the evapotranspiration process, bulk canopy resistance (r_c) is a primary factor, and its correct modelling remains an important problem in the Penman-Monteith (PM) method, not only for tall crops but also for medium-height and short crops under water stress. In this work, an alternative approach for modelling canopy resistance is presented and tested against the PM method with constant canopy resistance. Variable r_c values are computed as a function of a climatic resistance and compared with two other models, those of Katerji and Perrier and of Todorovic. Hourly evapotranspiration values (ET_o) over grass were obtained with a weighing lysimeter and an eddy covariance system in the Ebro and Guadalquivir valleys (Spain), respectively. The main objective is to evaluate whether the use of variable rather than fixed r_c values would improve the ET_o estimates obtained by applying the PM equation under the semiarid conditions of the two sites, where evaporative demand is high, particularly during summer.
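
    For reference, the PM equation with an explicit bulk canopy resistance can be sketched as follows; all inputs are illustrative values in SI-consistent units, not measurements from the two sites:

        def penman_monteith(rn, g, delta, gamma, rho_cp, vpd, ra, rc):
            """Latent heat flux (W m-2) from the Penman-Monteith equation.

            rn, g   : net radiation and soil heat flux (W m-2)
            delta   : slope of saturation vapour pressure curve (Pa K-1)
            gamma   : psychrometric constant (Pa K-1)
            rho_cp  : air density times specific heat (J m-3 K-1)
            vpd     : vapour pressure deficit (Pa)
            ra, rc  : aerodynamic and bulk canopy resistances (s m-1)
            """
            num = delta * (rn - g) + rho_cp * vpd / ra
            den = delta + gamma * (1.0 + rc / ra)
            return num / den

        # Same conditions, fixed vs. larger (stressed) canopy resistance
        print(penman_monteith(500, 50, 145.0, 66.0, 1206.0, 1500.0, 50.0, 70.0))
        print(penman_monteith(500, 50, 145.0, 66.0, 1206.0, 1500.0, 50.0, 120.0))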

  8. Bayesian analysis of inflation: Parameter estimation for single field models

    International Nuclear Information System (INIS)

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-01-01

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ^n with n = 2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the "primordial dark ages" between TeV and grand unified theory scale energies.

  9. Estimating Net Primary Production of Swedish Forest Landscapes by Combining Mechanistic Modeling and Remote Sensing

    DEFF Research Database (Denmark)

    Tagesson, Håkan Torbern; Smith, Benjamin; Løfgren, Anders

    2009-01-01

    and the Beer-Lambert law. LAI estimates were compared with satellite-extrapolated field estimates of LAI, and the results were generally acceptable. NPP estimates directly from the dynamic vegetation model and estimates obtained by combining the model estimates with remote sensing information were, on average...
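
    The Beer-Lambert step mentioned above amounts to inverting I/I0 = exp(-k * LAI); a minimal sketch with an assumed extinction coefficient k:

        import numpy as np

        def lai_from_transmission(par_below, par_above, k=0.5):
            """Invert the Beer-Lambert law I/I0 = exp(-k * LAI) for LAI."""
            return -np.log(par_below / par_above) / k

        print(lai_from_transmission(120.0, 1000.0))  # ~4.2 with k = 0.5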

  10. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonkers, C.M.; Treur, J.

    2002-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems,

  11. Micro, nanosystems and systems on chips modeling, control, and estimation

    CERN Document Server

    Voda, Alina

    2013-01-01

    Micro and nanosystems represent a major scientific and technological challenge, with actual and potential applications in almost all fields of human activity. The aim of the present book is to show how concepts from dynamical control systems (modeling, estimation, observation, identification, feedback control) can be adapted and applied to the development of original very small-scale systems and of their human interfaces. The application fields presented here come from micro and nanorobotics, biochips, near-field microscopy (AFM and STM) and nanosystems networks. Alina Voda has drawn co

  12. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  13. Solvation model for estimating the properties of (vapour + liquid) equilibrium

    International Nuclear Information System (INIS)

    Senol, Aynur

    2008-01-01

    A solvation energy relation (SERAS) has been developed for correlating the properties and (vapour + liquid) equilibrium (VLE) of associated systems capable of hydrogen bonding or dipole-dipole interaction. The model clarifies the simultaneous impact of hydrogen bonding, solubility and thermodynamic factors of activity coefficients derived from the UNIFAC-original model. The consistency test has been processed against binary VLE data for six isobaric systems of hydrogen bonding (I to III) and dipole-dipole interaction (IV to VI) types, and two isothermal systems of both types (VII and VIII). Systems II, III, and VIII show negative non-ideal deviations. The reliability analysis has been conducted on the performance of the SERAS model with 5- and 10-parameters. The model matches relatively well with the observed performance, yielding mean error of 9.7% for all the systems and properties considered

  14. Solvation model for estimating the properties of (vapour + liquid) equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Senol, Aynur [Istanbul University, Faculty of Engineering, Department of Chemical Engineering, 34320 Avcilar, Istanbul (Turkey)], E-mail: senol@istanbul.ed.tr

    2008-08-15

    A solvation energy relation (SERAS) has been developed for correlating the properties and (vapour + liquid) equilibrium (VLE) of associated systems capable of hydrogen bonding or dipole-dipole interaction. The model clarifies the simultaneous impact of hydrogen bonding, solubility and thermodynamic factors of activity coefficients derived from the UNIFAC-original model. The consistency test has been processed against binary VLE data for six isobaric systems of hydrogen bonding (I to III) and dipole-dipole interaction (IV to VI) types, and two isothermal systems of both types (VII and VIII). Systems II, III, and VIII show negative non-ideal deviations. The reliability analysis has been conducted on the performance of the SERAS model with 5- and 10-parameters. The model matches relatively well with the observed performance, yielding mean error of 9.7% for all the systems and properties considered.

  15. Estimating climate change impact on irrigation demand using integrated modelling

    International Nuclear Information System (INIS)

    Zupanc, Vesna; Pintar, Marina

    2004-01-01

    Water is a basic element in agriculture and, along with soil characteristics, is essential for the growth and development of plants. Trends of air temperature and precipitation for Slovenia indicate an increase of air temperature and a reduction of precipitation during the vegetation period, which will have a substantial impact on the rural economy in Slovenia. The impact of climate change on the soil water balance will be substantial. Distinct drought periods in past years had a great impact on crops grown on light soils. Climate change will most probably also result in drought in soils which otherwise provide an optimal water supply for plants. The water balance over the cross section of the rooting depth is significant for agriculture. Mathematical models reduce the number of measurements needed in an area: measurements are carried out only at characteristic points and serve for verification and calibration of the model. The combination of on-site measurements and mathematical modelling has proved to be an efficient method for understanding processes in nature. Climate scenarios made for the estimation of the impact of climate change are based on general circulation models. A study based on a hundred-year set of monthly data showed that in Slovenia temperature would increase by 2.3 °C at minimum, by 5.6 °C at maximum, and by 4.5 °C on average. A valid methodology for estimating the impact of climate change applies the model to a basic set of data for a thirty-year period (1961-1990) on one hand and to a changed set of climate input parameters on the other, and compares the output results of the model. The estimate of the climate change impact on irrigation demand in western Slovenia for peaches and nectarines grown on Cambisols and Fluvisols was made using the computer model SWAP. SWAP is a precise and powerful tool for estimating the elements of the soil water balance over the cross section of the monitored and studied profile from the soil surface

  16. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s−1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  17. Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Otmar Loffeld

    2012-04-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and obtain an estimate of the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we get an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can have an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively higher sampling frequency, the error of accelerometer measurements is dominated by the noise error. Consequently, simulations are conducted on two models, one with bias parameters appended in the state space model and the other a reduced model without bias parameters.
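
    The paper's central observability point, that known dynamics make an appended constant bias estimable, can be illustrated with a toy linear Kalman filter; the decaying-rotation model, noise levels and true values below are assumptions for illustration, not the GF-IMU model itself:

        import numpy as np

        # State x = [omega, bias]; a known dynamic model (omega decays, bias
        # is constant) makes the bias observable from z = omega + bias + noise
        a = 0.98
        A = np.array([[a, 0.0], [0.0, 1.0]])
        H = np.array([[1.0, 1.0]])
        Q = np.diag([1e-4, 1e-8])              # process noise
        R = np.array([[1e-2]])                 # measurement noise

        x, P = np.array([0.0, 0.0]), np.eye(2) # estimate and covariance
        rng = np.random.default_rng(4)
        omega, bias = 5.0, 0.3                 # "true" values

        for _ in range(300):
            omega *= a                         # simulate truth and measurement
            z = omega + bias + rng.normal(0.0, 0.1)
            x, P = A @ x, A @ P @ A.T + Q      # predict
            S = H @ P @ H.T + R                # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P

        print(x)   # estimated [omega, bias]; bias should be close to 0.3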

  18. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    Science.gov (United States)

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed if unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  19. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    Science.gov (United States)

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  20. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
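
    A minimal implementation of the Ogata-Banks solution used as the data model above (the velocity and dispersion values are illustrative):

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D, c0=1.0):
            """Ogata-Banks solution of the 1-D advection-dispersion equation
            for steady flow with a continuous inlet concentration c0."""
            arg1 = (x - v * t) / (2.0 * np.sqrt(D * t))
            arg2 = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * c0 * (erfc(arg1) + np.exp(v * x / D) * erfc(arg2))

        # Concentration 5 m downstream over time (illustrative v, D)
        for t in (10.0, 50.0, 200.0):
            print(t, ogata_banks(x=5.0, t=t, v=0.05, D=0.1))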

  1. An improved model for estimating pesticide emissions for agricultural LCA

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky

    2011-01-01

    Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms like for example insects or weed plants. PestLCI was developed to estimate the fractions...

  2. Assessing Interval Estimation Methods for Hill Model Parameters in a High-Throughput Screening Context (IVIVE meeting)

    Science.gov (United States)

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maxi...
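
    A sketch of point and Wald-type interval estimates for Hill parameters via least squares; the concentration-response data and starting values below are made up for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(conc, top, ec50, n):
            """Hill concentration-response: efficacy `top`, potency `ec50`."""
            return top * conc**n / (ec50**n + conc**n)

        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        resp = np.array([2.0, 6.0, 19.0, 45.0, 78.0, 94.0, 99.0])

        popt, pcov = curve_fit(hill, conc, resp, p0=[100.0, 5.0, 1.0])
        perr = np.sqrt(np.diag(pcov))   # large-sample (Wald) standard errors
        for name, est, se in zip(("top", "ec50", "n"), popt, perr):
            print(f"{name}: {est:.2f} +/- {1.96 * se:.2f}")  # ~95% interval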

  3. Estimating effects of rare haplotypes on failure time using a penalized Cox proportional hazards regression model

    Directory of Open Access Journals (Sweden)

    Tanck Michael WT

    2008-01-01

    Background: This paper describes a likelihood approach to model the relation between failure time and haplotypes in studies with unrelated individuals where haplotype phase is unknown, while dealing with the problem of unstable estimates due to rare haplotypes by considering a penalized log-likelihood. Results: The Cox model presented here incorporates the uncertainty related to the unknown phase of multiple heterozygous individuals as weights. Estimation is performed with an EM algorithm. In the E-step the weights are estimated, and in the M-step the parameters are estimated by maximizing the expectation of the joint log-likelihood, and the baseline hazard function and haplotype frequencies are calculated. These steps are iterated until the parameter estimates converge. Two penalty functions are considered, namely the ridge penalty and a difference penalty, the latter based on the assumption that similar haplotypes show similar effects. Simulations were conducted to investigate properties of the method, and the association between IL10 haplotypes and risk of target vessel revascularization was investigated in 2653 patients from the GENDER study. Conclusion: Results from simulations and real data show that the penalized log-likelihood approach produces valid results, indicating that this method is of interest when studying the association between rare haplotypes and failure time in studies of unrelated individuals.

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  5. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper first describes the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the chi-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
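
    Following the usage documented by the toolbox authors, a typical HDDM session looks roughly like this (the file name and condition column are hypothetical):

        import hddm

        # Trial-level data with 'rt' and 'response' columns, plus here a
        # 'difficulty' condition column
        data = hddm.load_csv('mydata.csv')

        # Let drift rate v vary by condition; other parameters are
        # estimated hierarchically across subjects
        model = hddm.HDDM(data, depends_on={'v': 'difficulty'})
        model.sample(2000, burn=20)   # draw posterior samples via MCMC
        model.print_stats()           # posterior means, std and quantiles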

  6. Models and relations in economics and econometrics

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1999-01-01

    Based on a money market analysis using the cointegrated VAR model the paper demonstrates some possible pitfalls in macroeconomic inference as a direct consequence of inadequate stochastic model formulation. A number of questions related to concepts such as empirical and theoretical steady-states,...

  7. Models and relations in economics and econometrics

    DEFF Research Database (Denmark)

    Juselius, Katarina

    1999-01-01

    Based on a money market analysis using the cointegrated VAR model the paper demonstrates some possible pitfalls in macroeconomic inference as a direct consequence of inadequate stochastic model formulation. A number of questions related to concepts such as empirical and theoretical steady...

  8. Relating business modelling and enterprise architecture

    NARCIS (Netherlands)

    Meertens, Lucas Onno

    2013-01-01

    This thesis proposes a methodology for creating business models, evaluating them, and relating them to enterprise architecture. The methodology consists of several steps, leading from an organization’s current situation to a target situation, via business models and enterprise architecture.

  9. Selecting and applying cesium-137 conversion models to estimate soil erosion rates in cultivated fields.

    Science.gov (United States)

    Li, Sheng; Lobb, David A; Tiessen, Kevin H D; McConkey, Brian G

    2010-01-01

    The fallout radionuclide cesium-137 ((137)Cs) has been successfully used in soil erosion studies worldwide. However, discrepancies often exist between the erosion rates estimated using various conversion models. As a result, there is often confusion in the use of the various models and in the interpretation of the data. Therefore, the objective of this study was to test the structural and parametric uncertainties associated with four conversion models typically used in cultivated agricultural landscapes. For the structural uncertainties, the Soil Constituent Redistribution by Erosion Model (SCREM) was developed and used to simulate the redistribution of fallout (137)Cs due to tillage and water erosion along a simple two-dimensional (horizontal and vertical) transect. The SCREM-predicted (137)Cs inventories were then imported into the conversion models to estimate the erosion rates. The structural uncertainties of the conversion models were assessed based on comparisons between the conversion-model-estimated erosion rates and the erosion rates determined or used in the SCREM. For the parametric uncertainties, test runs were conducted by varying the values of the parameters used in the models, and the parametric uncertainties were assessed based on the responsive changes of the estimated erosion rates. Our results suggest that: (i) the performance/accuracy of the conversion models was largely dependent on the relative contributions of water vs. tillage erosion; and (ii) the estimated erosion rates were highly sensitive to the input values of the reference (137)Cs level, particle size correction factors and tillage depth. Guidelines were proposed to aid researchers in selecting and applying the conversion models under various situations common to agricultural landscapes.

  10. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Science.gov (United States)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of

  12. Research on parafoil stability using a rapid estimate model

    Directory of Open Access Journals (Sweden)

    Hua YANG

    2017-10-01

    With the consideration of rotation between canopy and payload of a parafoil system, a four-degree-of-freedom (4-DOF) longitudinal static model was used to solve the parafoil state variables in straight steady flight. The aerodynamic solution of the parafoil system was a combination of the vortex lattice method (VLM) and an engineering estimation method. Based on the small disturbance assumption, a 6-DOF linear model that considers canopy additional mass was established, with the benchmark state calculated by the 4-DOF static model. Modal analysis of the dynamic model was used to calculate the stability parameters. This method, based on a small disturbance linear model and modal analysis, is highly efficient for the study of parafoil stability. It is well suited for rapid stability analysis in the preliminary stage of parafoil design. Using this method, this paper shows that longitudinal and lateral stability will both decrease when the steady climbing angle increases. This explains the wavy track of the parafoil observed during climbing.
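
    The modal-analysis step reduces to examining the eigenvalues of the linearized state matrix; a minimal sketch with an illustrative matrix (not an actual parafoil model):

        import numpy as np

        # Stability of a linearised system x' = A x follows from the
        # eigenvalues of A: all real parts negative -> stable modes
        A = np.array([[-0.42,  0.10,  0.0 ],
                      [ 0.05, -0.90,  1.0 ],
                      [ 0.0,  -2.10, -0.35]])

        eigvals = np.linalg.eigvals(A)
        print(eigvals)
        print("stable" if np.all(eigvals.real < 0) else "unstable")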

  13. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
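
    A reduced-order algebraic model of the kind compared above can be sketched in a few lines; the pressure-drop coefficients and hyperaemic flow below are illustrative, not fitted values:

        def ffr_from_stenosis(pa, q, a, b):
            """FFR from an algebraic pressure-drop model dP = a*q + b*q**2,
            where `a` lumps viscous losses and `b` expansion losses."""
            dp = a * q + b * q**2
            return (pa - dp) / pa

        # Hyperaemic flow of 3 mL/s, aortic pressure 93 mmHg
        print(ffr_from_stenosis(pa=93.0, q=3.0, a=2.0, b=1.1))  # ~0.83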

  14. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
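
    A minimal sketch of the underlying Lotka-Volterra competition dynamics; all parameter values are illustrative stand-ins for the estimated impacts and inheritance rates:

        import numpy as np
        from scipy.integrate import solve_ivp

        def competition(t, x, r1, r2, k1, k2, a12, a21):
            """Lotka-Volterra competition between speaker populations x1, x2."""
            x1, x2 = x
            dx1 = r1 * x1 * (1 - (x1 + a12 * x2) / k1)
            dx2 = r2 * x2 * (1 - (x2 + a21 * x1) / k2)
            return [dx1, dx2]

        sol = solve_ivp(competition, (0, 100), [0.6, 0.4],
                        args=(0.3, 0.25, 1.0, 1.0, 0.9, 1.1))
        print(sol.y[:, -1])   # speaker fractions after 100 time units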

  15. Estimating the clinical benefits of vaccinating boys and girls against HPV-related diseases in Europe

    International Nuclear Information System (INIS)

    Marty, Rémi; Roze, Stéphane; Bresse, Xavier; Largeron, Nathalie; Smith-Palmer, Jayne

    2013-01-01

    HPV is related to a number of cancer types, causing a considerable burden in both genders in Europe. Female vaccination programs can substantially reduce the incidence of HPV-related diseases in women and, to some extent, in men through herd immunity. The objective was to estimate the incremental benefit of vaccinating boys and girls using the quadrivalent HPV vaccine in Europe versus girls-only vaccination. Incremental benefits in terms of reduction in the incidence of HPV 6, 11, 16 and 18-related diseases (including cervical, vaginal, vulvar, anal, penile, and head and neck carcinomas and genital warts) were assessed. The analysis was performed using a model constructed in Microsoft® Excel, based on a previously-published dynamic transmission model of HPV vaccination and published European epidemiological data on incidence of HPV-related diseases. The incremental benefits of vaccinating 12-year-old girls and boys versus girls-only vaccination were assessed (70% vaccine coverage was assumed for both). Sensitivity analyses around vaccine coverage and duration of protection were performed. Compared with screening alone, girls-only vaccination led to an 84% reduction in HPV 16/18-related carcinomas in females and a 61% reduction in males. Vaccination of girls and boys led to a 90% reduction in HPV 16/18-related carcinomas in females and an 86% reduction in males versus screening alone. Relative to a girls-only program, vaccination of girls and boys led to a reduction in female and male HPV-related carcinomas of 40% and 65%, respectively, and a reduction in the incidence of HPV 6/11-related genital warts of 58% for females and 71% for males versus girls-only vaccination. In Europe, the vaccination of 12-year-old boys and girls against HPV 6, 11, 16 and 18 would be associated with substantial additional clinical benefits in terms of reduced incidence of HPV-related genital warts and carcinomas versus girls-only vaccination. The incremental benefits of adding boys' vaccination are

  16. Estimates of the relative specific yield of aquifers from geo-electrical ...

    African Journals Online (AJOL)

    This paper discusses a method of estimating aquifer specific yield based on surface resistivity sounding measurements supplemented with data on water conductivity. The practical aim of the method is to suggest a parallel low-cost method of estimating aquifer properties. The starting point is Archie's law, which relates ...

  17. Oscillation estimates relative to p-homogeneous forms and Kato measures data

    Directory of Open Access Journals (Sweden)

    Marco Biroli

    2006-11-01

    We state a pointwise estimate for the positive subsolutions associated with a p-homogeneous form and nonnegative Radon measure data. As a by-product, we establish an oscillation estimate for the solutions relative to Kato measure data.

  18. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas...

  19. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models, which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs

  20. Fast Estimation of Multinomial Logit Models: R Package mnlogit

    Directory of Open Access Journals (Sweden)

    Asad Hasan

    2016-11-01

    We present the R package mnlogit for estimating multinomial logistic regression models, particularly those involving a large number of categories and variables. Compared to existing software, mnlogit offers speedups of 10-50 times for modestly sized problems and more than 100 times for larger problems. Running in parallel mode on a multicore machine gives up to 4 times additional speedup on 8 processor cores. mnlogit achieves its computational efficiency by drastically speeding up computation of the log-likelihood function's Hessian matrix through exploiting structure in matrices that arise in intermediate calculations. This efficient exploitation of intermediate data structures allows mnlogit to utilize system memory much more efficiently, such that for most applications mnlogit requires less memory than comparable software by a factor that is proportional to the number of model categories.

  1. Functional Model to Estimate the Inelastic Displacement Ratio

    Directory of Open Access Journals (Sweden)

    Ceangu Vlad

    2017-12-01

    In this paper a functional model to estimate the inelastic displacement ratio as a function of the ductility factor is presented. The coefficients of the functional model are approximated using nonlinear regression. The data used are computed displacements of an inelastic single-degree-of-freedom system with a fixed ductility factor. Inelastic seismic response spectra of constant ductility factors are used for generating the data. A method is presented for selecting ground motions whose frequency content is similar to that of the ones picked for the comparison. The variability of the seismic response of nonlinear single-degree-of-freedom systems with different hysteretic behavior is also presented.
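
    A sketch of the nonlinear-regression step with a hypothetical functional form C(mu) = 1 + a*(mu - 1)**b and made-up spectrum-derived data:

        import numpy as np
        from scipy.optimize import curve_fit

        def c_ratio(mu, a, b):
            """Hypothetical inelastic displacement ratio vs. ductility mu."""
            return 1.0 + a * (mu - 1.0) ** b

        mu = np.array([1.5, 2.0, 3.0, 4.0, 5.0, 6.0])          # ductility
        c_obs = np.array([1.07, 1.13, 1.24, 1.33, 1.41, 1.48]) # made-up data

        (a, b), _ = curve_fit(c_ratio, mu, c_obs, p0=[0.2, 0.7])
        print(a, b)   # fitted coefficients of the functional model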

  2. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    Science.gov (United States)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
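
    Since the optimal weights depend on each member's mean square error, a minimal inverse-MSE weighting sketch (a simple stand-in for the memorandum's derivation, with made-up numbers) is:

        import numpy as np

        def ensemble_weights(mse):
            """Weights inversely proportional to each member's MSE."""
            w = 1.0 / np.asarray(mse, float)
            return w / w.sum()

        forecasts = np.array([[1.2, 0.8, 1.0],   # member 1 over 3 grid points
                              [1.5, 0.6, 0.9],   # member 2
                              [1.1, 0.9, 1.2]])  # member 3
        mse = [0.30, 0.50, 0.25]                 # estimated member errors

        w = ensemble_weights(mse)
        print(w, w @ forecasts)                  # weights, combined forecast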

  3. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, affect travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

4. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA (Convergence of Estimators in Mixture Models Based on Missing Data)

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

Full Text Available A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured (uncured). In this study, the mixture model is developed for cure-rate analysis based on missing data. Several methods are available for analyzing missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after every iteration; hence, under the EM algorithm, the likelihood sequence converges provided the likelihood is bounded above.
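
    A minimal, runnable EM sketch for an exponential cure-rate mixture (parameters and data invented for illustration) makes the four steps and the monotone log-likelihood concrete:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic cure-rate data: a fraction pi is cured (never fails); the
    # uncured have exponential(lam) event times; censor everything at tau.
    pi_true, lam_true, tau, n = 0.3, 0.5, 8.0, 2000
    cured = rng.random(n) < pi_true
    times = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
    d = (times <= tau).astype(float)          # 1 = event observed
    t = np.minimum(times, tau)                # censored observation times

    def loglik(pi, lam):
        s = np.exp(-lam * t)                  # uncured survival at t
        return np.sum(d * np.log((1 - pi) * lam * s)
                      + (1 - d) * np.log(pi + (1 - pi) * s))

    pi, lam = 0.5, 1.0                        # step 1: initial parameters
    for it in range(50):
        # E-step: posterior probability that a censored subject is cured.
        s = np.exp(-lam * t)
        w = np.where(d == 1, 0.0, pi / (pi + (1 - pi) * s))
        # M-step: update parameters from the expected complete data.
        pi = w.mean()
        lam = d.sum() / np.sum((1 - w) * t)
        if it % 10 == 0:                      # log-likelihood never decreases
            print(f"iter {it:2d}  loglik {loglik(pi, lam):.2f}")

    print("estimates:", pi, lam)              # close to 0.3 and 0.5
    ```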

  5. Absolute Monotonicity of Functions Related To Estimates of First Eigenvalue of Laplace Operator on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Feng Qi

    2014-10-01

Full Text Available The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimating lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.

  6. Microvascular glycocalyx dimension estimated by automated SDF imaging is not related to cardiovascular disease

    NARCIS (Netherlands)

    Amraoui, Fouad; Olde Engberink, Rik H. G.; van Gorp, Jacqueline; Ramdani, Amal; Vogt, Liffert; van den Born, Bert-Jan H.

    2014-01-01

The endothelial glycocalyx (EG) regulates vascular homeostasis and has anti-atherogenic properties. Sidestream dark field (SDF) imaging allows for noninvasive visualization of microvessels and automated estimation of EG dimensions. We aimed to assess whether microcirculatory EG dimension is related to cardiovascular disease. Sublingual EG

  7. Significance of relative velocity in drag force or drag power estimation for a tethered float

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sastry, J.S.

There is a difference of opinion regarding the use of relative velocity, rather than particle velocity alone, in the estimation of drag force or power. In the present study, a tethered spherical float which undergoes oscillatory motion in regular waves...
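
    A quick numerical sketch of the point at issue: the quadratic drag force computed with the relative velocity u - v differs substantially from that computed with the particle velocity u alone (fluid and float properties below are invented):

    ```python
    import numpy as np

    rho, Cd, A = 1025.0, 0.5, 0.8            # seawater; float drag properties (assumed)

    def drag_force(u):
        """Quadratic drag, sign-preserving: F = 0.5*rho*Cd*A*|u|*u."""
        return 0.5 * rho * Cd * A * np.abs(u) * u

    t = np.linspace(0.0, 10.0, 1001)
    u = 1.2 * np.cos(2 * np.pi * t / 5.0)        # wave particle velocity (m/s)
    v = 0.8 * np.cos(2 * np.pi * t / 5.0 - 0.6)  # float velocity, lagging the wave

    F_rel = drag_force(u - v)     # relative-velocity formulation
    F_par = drag_force(u)         # particle velocity alone
    print("rms drag, relative velocity:", np.sqrt(np.mean(F_rel**2)), "N")
    print("rms drag, particle velocity:", np.sqrt(np.mean(F_par**2)), "N")
    ```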

  8. Network estimation in State Space Models with L1-regularization ...

    African Journals Online (AJOL)

    Microarray technologies and related methods coupled with appropriate mathematical and statistical models have made it possible to identify dynamic regulatory networks by measuring time course expression levels of many genes simultaneously. However one of the challenges is the high-dimensional nature of such data ...

  9. A model for estimating carbon accumulation in cork products

    Directory of Open Access Journals (Sweden)

    Ana C. Dias

    2014-08-01

Full Text Available Aim of study: This study aims to develop a calculation model for estimating carbon accumulation in cork products, both whilst in use and when in landfills, and to apply the model to Portugal as an example. Area of study: The model is applicable worldwide; the case study is Portugal. Material and methods: The model adopts a flux-data method based on a lifetime analysis and quantifies carbon accumulation in cork products according to three approaches that differ in how carbon stocks (or emissions) are allocated to the countries that consume and produce cork products. These approaches are: stock-change, production and atmospheric-flow. The effect on the carbon balance of methane emissions from the decay of cork products in landfills is also evaluated. Main results: The model was applied to Portugal and the results show that carbon accumulation in cork products in the period between 1990 and 2010 varied between 24 and 92 Gg C year-1. The atmospheric-flow approach provided the highest carbon accumulation over the whole period due to the net export of carbon in cork products. The production approach ranked second because exported cork products were mainly manufactured from domestically produced cork. The net carbon balance in cork products was also a net accumulation under all approaches, ranging from 5 to 81 Gg C eq year-1. Research highlights: The developed model can be applied to other countries and may be a step toward considering carbon accumulation in cork products in national greenhouse gas inventories, as well as in future climate agreements. Keywords: Atmospheric-flow approach; Greenhouse gas balance; Modelling; Production approach; Stock-change approach.
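
    The flux-data bookkeeping can be sketched in a few lines: products entering use add carbon, first-order decay removes it, and the accounting approaches differ only in which flows are credited to the country. All numbers below are invented placeholders, not the paper's inventory data.

    ```python
    import numpy as np

    years = np.arange(1990, 2011)
    half_life = 30.0                          # assumed mean product half-life (yr)
    k = np.log(2) / half_life

    inflow_dom = np.full(years.size, 60.0)    # C entering domestic use (Gg C/yr, invented)
    net_export = np.full(years.size, 25.0)    # C in net-exported products (Gg C/yr, invented)

    stock = 0.0
    acc_stock_change, acc_atm_flow = [], []
    for inflow, nexp in zip(inflow_dom, net_export):
        decay = k * stock                     # first-order loss from the product pool
        stock += inflow - decay
        acc_stock_change.append(inflow - decay)       # stock-change approach
        acc_atm_flow.append(inflow - decay + nexp)    # atmospheric-flow adds net exports

    print("mean annual accumulation, stock-change    :", np.mean(acc_stock_change))
    print("mean annual accumulation, atmospheric-flow:", np.mean(acc_atm_flow))
    ```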

  10. A Comparison of Alternative Estimators of Linearly Aggregated Macro Models

    Directory of Open Access Journals (Sweden)

    Fikri Akdeniz

    2012-07-01

Full Text Available This paper deals with the linear aggregation problem. For the true underlying micro relations, which explain the micro behavior of the individuals, no restrictive rank conditions are assumed. Thus the analysis is presented in a framework utilizing generalized inverses of singular matrices. We investigate several estimators for certain linear transformations of the systematic part of the corresponding macro relations. Homogeneity of micro parameters is discussed. Best linear unbiased estimation for micro parameters is described.

  11. Using a relative health indicator (RHI) metric to estimate health risk reductions in drinking water.

    Science.gov (United States)

    Alfredo, Katherine A; Seidel, Chad; Ghosh, Amlan; Roberson, J Alan

    2017-03-01

When a new drinking water regulation is being developed, the USEPA conducts a health risk reduction and cost analysis to, in part, estimate the quantifiable and non-quantifiable costs and benefits of the various regulatory alternatives. Numerous methodologies are available for cumulative risk assessment, ranging from primarily qualitative to primarily quantitative. This research developed a summary metric of relative cumulative health impacts resulting from drinking water, the relative health indicator (RHI). An intermediate level of quantification and modeling was chosen, one which retains the concept of an aggregated metric of public health impact and hence allows comparisons to be made across "cups of water," but avoids the need for development and use of complex models that are beyond the existing state of the science. Using the USEPA Six-Year Review data and available national occurrence surveys of drinking water contaminants, the metric is used to test risk reduction as it pertains to the implementation of the arsenic and uranium maximum contaminant levels and to quantify "meaningful" risk reduction. Uranium represented the threshold risk reduction against which national non-compliance risk reduction was compared for arsenic, nitrate, and radium. Arsenic non-compliance is most significant, and efforts focused on bringing those non-compliant utilities into compliance with the 10 μg/L maximum contaminant level would meet the threshold for meaningful risk reduction.

  12. Relative Attitude Estimation for a Uniform Motion and Slowly Rotating Noncooperative Spacecraft

    Directory of Open Access Journals (Sweden)

    Liu Zhang

    2017-01-01

Full Text Available This paper presents a novel relative attitude estimation approach for a uniform motion and slowly rotating noncooperative spacecraft. It is assumed that the uniform motion and slowly rotating noncooperative chief spacecraft is in failure or out of control and that no a priori rotation rate information is available. We utilize a very fast binary descriptor based on binary robust independent elementary features (BRIEF) to obtain features of the target that are rotationally invariant and resistant to noise. We then propose a novel combination of single-candidate random sample consensus (RANSAC) with an extended Kalman filter (EKF) that makes use of the available prior probabilistic information from the EKF in the RANSAC model hypothesis stage. This combination reduces the sample size to only one, which results in large computational savings without loss of accuracy. Experimental results from a real image sequence of a model target show that the relative angular error is about 3.5% and the mean angular velocity error is about 0.1 deg/s.

  13. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models, whereas information criteria, which incorporate only the likelihood maximum, tend to select over-fitted ones. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator.
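
    The idea behind GMIS can be reproduced on a toy problem with a known answer: fit a Gaussian mixture to posterior samples (here drawn from a closed-form posterior instead of DREAM) and use it as an importance-sampling proposal for the marginal likelihood. This is a schematic re-creation, not the authors' code.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy model: y_i ~ N(theta, s2) with known s2; prior theta ~ N(0, t2).
    n, s2, t2 = 20, 1.0, 4.0
    y = rng.normal(1.5, np.sqrt(s2), n)

    # Closed-form posterior, standing in for DREAM-generated samples.
    v = 1.0 / (n / s2 + 1.0 / t2)
    m = v * y.sum() / s2
    theta = rng.normal(m, np.sqrt(v), 5000)

    # Gaussian mixture fitted to the posterior draws = importance proposal.
    gm = GaussianMixture(n_components=2, random_state=0).fit(theta.reshape(-1, 1))
    prop = gm.sample(20000)[0].ravel()
    log_q = gm.score_samples(prop.reshape(-1, 1))
    log_like = np.array([norm.logpdf(y, th, np.sqrt(s2)).sum() for th in prop])
    log_prior = norm.logpdf(prop, 0.0, np.sqrt(t2))
    log_w = log_like + log_prior - log_q
    evidence = np.exp(log_w - log_w.max()).mean() * np.exp(log_w.max())

    # Analytic evidence for comparison (y is jointly Gaussian here).
    Sigma = s2 * np.eye(n) + t2 * np.ones((n, n))
    print(evidence, multivariate_normal(np.zeros(n), Sigma).pdf(y))
    ```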

  14. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    Science.gov (United States)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

    The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.

  15. Modeling, State Estimation and Control of Unmanned Helicopters

    Science.gov (United States)

    Lau, Tak Kit

Unmanned helicopters hold both tremendous potential and challenges. Without risking the lives of human pilots, these vehicles exhibit agile movement and the ability to hover, and hence open up a wide range of applications in hazardous situations. Sparing human lives, however, comes at a stiff price for technology. Some of the key difficulties that arise in these challenges are: (i) there are unexplained cross-coupled responses between the control axes on hingeless helicopters that have puzzled researchers for years; (ii) most, if not all, navigation on unmanned helicopters relies on Global Navigation Satellite Systems (GNSSs), which are susceptible to jamming; (iii) it is often necessary to accommodate re-configurations of the payload or the actuators on the helicopters by repeatedly tuning an autopilot, which requires intensive human supervision and/or system identification. For the dynamics modeling and analysis, we present a comprehensive review of helicopter actuation and dynamics, and contribute toward a more complete understanding of the on-axis and off-axis dynamical responses of the helicopter. We focus on a commonly used modeling technique, namely the phase-lag treatment, and employ a first-principles modeling method to justify (i) why that phase-lag technique is inaccurate, and (ii) how the helicopter actuation and dynamics can be analyzed more accurately. Moreover, this dynamics modeling and analysis reveals the hard-to-measure but crucial parameters of a helicopter model that require constant identification, and hence conveys the reasoning for seeking a model-implicit method to solve the state estimation and control problems on unmanned helicopters. For the state estimation, we present a robust localization method for the unmanned helicopter against GNSS outage. This method infers position from the acceleration measurement from an inertial measurement unit (IMU). In the core of our method are techniques of the sensor

  16. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Directory of Open Access Journals (Sweden)

    Troy W Grovenburg

Full Text Available New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., weekly intervals).

  17. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Science.gov (United States)

    Grovenburg, Troy W; Monteith, Kevin L; Jacques, Christopher N; Klaver, Robert W; DePerno, Christopher S; Brinkman, Todd J; Monteith, Kyle B; Gilbert, Sophie L; Smith, Joshua B; Bleich, Vernon C; Swanson, Christopher C; Jenks, Jonathan A

    2014-01-01

New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., weekly intervals).
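
    The back-calculation itself is simple, which is why the choice of equation matters so much; a toy comparison with invented coefficients (not the published species-specific equations):

    ```python
    import numpy as np

    # Age-from-hoof-growth is typically a linear back-calculation:
    # age (days) = (measured new hoof growth - growth at birth) / daily rate.
    # The offsets and rates below are made up purely to show how equation
    # choice shifts the estimated age of the same animal.
    hoof_growth_mm = 6.0
    equations = {"eq A": (0.0, 0.38),     # (offset mm, growth mm/day)
                 "eq B": (1.0, 0.32),
                 "eq C": (0.5, 0.45)}
    for name, (offset, rate) in equations.items():
        age = (hoof_growth_mm - offset) / rate
        print(name, "estimated age:", round(age, 1), "days")
    ```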

  18. Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops

    Science.gov (United States)

    Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said

    2017-11-01

The installation of solar panels on Australian rooftops has been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, whose distribution is factorized by a probability distribution, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the quasi-Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, and the outcome indicates satisfactory agreement between the model and the variation in the measured data.
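
    A minimal sketch of the sampling side: quasi-Monte Carlo (Sobol) draws of a clearness index pushed through a toy tilted-irradiance function. The Beta parameters and the diffuse-fraction correlation below are placeholders, not the paper's Australian fits.

    ```python
    import numpy as np
    from scipy.stats import qmc, beta

    # Sobol (quasi-Monte Carlo) samples of a clearness-index distribution.
    sampler = qmc.Sobol(d=1, scramble=True, seed=7)
    u = sampler.random_base2(m=12).ravel()            # 4096 low-discrepancy points
    kt = beta.ppf(u, a=3.5, b=2.5)                    # clearness index draws

    def tilted_irradiance(kt, G0=1000.0, tilt_gain=1.1):
        # Crude linear diffuse-fraction correlation, clipped to stay physical.
        diffuse_frac = np.clip(1.0 - 1.13 * kt, 0.15, 1.0)
        Gh = kt * G0                                  # global horizontal irradiance
        beam, diffuse = (1 - diffuse_frac) * Gh, diffuse_frac * Gh
        return tilt_gain * beam + diffuse             # toy transposition to the tilt

    G = tilted_irradiance(kt)
    print("mean, std of tilted irradiance:", G.mean(), G.std())
    ```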

  19. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability

  20. Development of a foraging model framework to reliably estimate daily food consumption by young fishes

    Science.gov (United States)

    Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.

    2017-01-01

    We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r2 = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
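
    The mechanics of such a framework, a functional response for encounter/handling plus a satiation/evacuation constraint, can be sketched generically (all parameters invented; this is not the calibrated pallid sturgeon model):

    ```python
    import numpy as np

    def daily_consumption(N, a, h, gut_cap, evac_rate, hours=24):
        """Hourly Holling type-II intake, capped by gut fullness.

        N         : prey density (prey/m^2)
        a, h      : attack rate and handling time per prey
        gut_cap   : maximum gut content (prey items)
        evac_rate : fraction of gut contents evacuated per hour
        """
        gut, eaten = 0.0, 0.0
        for _ in range(hours):
            intake = a * N / (1.0 + a * h * N)          # functional response
            intake = min(intake, max(gut_cap - gut, 0)) # satiation constraint
            gut = (gut + intake) * (1.0 - evac_rate)    # digestion/evacuation
            eaten += intake
        return eaten

    for N in (5, 20, 80):
        print(N, round(daily_consumption(N, a=0.8, h=0.05,
                                         gut_cap=30, evac_rate=0.2), 1))
    ```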

  1. Fast joint detection-estimation of evoked brain activity in event-related FMRI using a variational approach

    Science.gov (United States)

    Chaari, Lotfi; Vincent, Thomas; Forbes, Florence; Dojat, Michel; Ciuciu, Philippe

    2013-01-01

    In standard within-subject analyses of event-related fMRI data, two steps are usually performed separately: detection of brain activity and estimation of the hemodynamic response. Because these two steps are inherently linked, we adopt the so-called region-based Joint Detection-Estimation (JDE) framework that addresses this joint issue using a multivariate inference for detection and estimation. JDE is built by making use of a regional bilinear generative model of the BOLD response and constraining the parameter estimation by physiological priors using temporal and spatial information in a Markovian model. In contrast to previous works that use Markov Chain Monte Carlo (MCMC) techniques to sample the resulting intractable posterior distribution, we recast the JDE into a missing data framework and derive a Variational Expectation-Maximization (VEM) algorithm for its inference. A variational approximation is used to approximate the Markovian model in the unsupervised spatially adaptive JDE inference, which allows automatic fine-tuning of spatial regularization parameters. It provides a new algorithm that exhibits interesting properties in terms of estimation error and computational cost compared to the previously used MCMC-based approach. Experiments on artificial and real data show that VEM-JDE is robust to model mis-specification and provides computational gain while maintaining good performance in terms of activation detection and hemodynamic shape recovery. PMID:23096056

  2. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available Abstrak __________________________________________________________________________________________ Model mixture dapat mengestimasi proporsi pasien yang sembuh (cured dan fungsi survival pasien tak sembuh (uncured. Pada kajian ini, model mixture dikembangkan untuk  analisis cure rate berbasis missing data. Ada beberapa metode yang dapat digunakan untuk analisis missing data. Salah satu metode yang dapat digunakan adalah Algoritma EM, Metode ini didasarkan pada 2 (dua langkah, yaitu: (1 Expectation Step dan (2 Maximization Step. Algoritma EM merupakan pendekatan iterasi untuk mempelajari model dari data dengan nilai hilang melalui 4 (empat langkah, yaitu(1 pilih himpunan inisial dari parameter untuk sebuah model, (2 tentukan nilai ekspektasi untuk data hilang, (3 buat induksi parameter model baru dari gabungan nilai ekspekstasi dan data asli, dan (4 jika parameter tidak converged, ulangi langkah 2 menggunakan model baru. Berdasar kajian yang dilakukan dapat ditunjukkan bahwa pada algoritma EM, log-likelihood untuk missing data mengalami kenaikan setelah dilakukan setiap iterasi dari algoritmanya. Dengan demikian berdasar algoritma EM, barisan likelihood konvergen jika likelihood terbatas ke bawah.   Abstract __________________________________________________________________________________________ Model mixture can estimate proportion of recovering patient  and function of patient survival do not recover. At this study, model mixture developed to analyse cure rate bases on missing data. There are some method which applicable to analyse missing data. One of method which can be applied is Algoritma EM, This method based on 2 ( two step, that is: ( 1 Expectation Step and ( 2 Maximization Step. EM Algorithm is approach of iteration to study model from data with value loses through 4 ( four step, yaitu(1 select;chooses initial gathering from parameter for a model, ( 2 determines expectation value for data to lose, ( 3 induce newfangled parameter

  3. Nonlinear State Estimation and Modeling of a Helicopter UAV

    Science.gov (United States)

    Barczyk, Martin

    Experimentally-validated nonlinear flight control of a helicopter UAV has two necessary conditions: an estimate of the vehicle’s states from noisy multirate output measurements, and a nonlinear dynamics model with minimum complexity, physically controllable inputs and experimentally identified parameter values. This thesis addresses both these objectives for the Applied Nonlinear Controls Lab (ANCL)'s helicopter UAV project. A magnetometer-plus-GPS aided Inertial Navigation System (INS) for outdoor flight as well as an Attitude and Heading Reference System (AHRS) for indoor testing are designed, implemented and experimentally validated employing an Extended Kalman Filter (EKF), using a novel calibration technique for the magnetometer aiding sensor added to remove the limitations of an earlier GPS-only aiding design. Next the recently-developed nonlinear observer design methodology of invariant observers is adapted to the aided INS and AHRS examples, employing a rotation matrix representation for the state manifold to obtain designs amenable to global stability analysis, obtaining a direct nonlinear design for gains of the AHRS observer, modifying the previously-proposed Invariant EKF systematic method for computing gains, and culminating in simulation and experimental validation of the observers. Lastly a nonlinear control-oriented model of the helicopter UAV is derived from first principles, using a rigid-body dynamics formulation augmented with models of the on-board subsystems: main rotor forces and blade flapping dynamics, the Bell-Hiller system and flybar flapping dynamics, tail rotor forces, tail gyro unit, engine and rotor speed, servo operation, fuselage drag, and tail stabilizer forces. The parameter values in the resulting models are identified experimentally. Using these the model is further simplified to be tractable for model-based control design.

  4. Model ecosystem approach to estimate community level effects of radiation

    Energy Technology Data Exchange (ETDEWEB)

Doi, Masahiro; Tanaka, Nobuyuki; Fuma, Shoichi; Ishii, Nobuyoshi; Takeda, Hiroshi; Kawabata, Zenichiro [National Institute of Radiological Sciences, Environmental and Toxicological Sciences Research Group, Chiba (Japan)

    2004-07-01

A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that builds in demographic stochasticity and, by dividing the aquatic environment into patches, environmental stochasticity. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) that relate to the probability of population extinction. (author)
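
    The link from individual-level endpoints to population-level extinction can be illustrated with a bare-bones, demographically stochastic birth-death simulation (rates and structure invented; the actual microcosm model is far richer):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def extinct_prob(death_rate, birth_rate=0.5, K=50, n0=20,
                     t_end=200.0, reps=300):
        """Monte Carlo extinction probability of a logistic birth-death process."""
        ext = 0
        for _ in range(reps):
            n, t = n0, 0.0
            while 0 < n and t < t_end:
                b = birth_rate * n * max(1 - n / K, 0)   # density-dependent births
                d = death_rate * n                       # "lethality" endpoint
                t += rng.exponential(1.0 / (b + d))      # Gillespie time step
                n += 1 if rng.random() < b / (b + d) else -1
            ext += (n == 0)
        return ext / reps

    # Raising individual mortality raises the population-level extinction risk.
    for dr in (0.2, 0.35, 0.5):
        print("death rate", dr, "-> P(extinction) =", extinct_prob(dr))
    ```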

  5. Model ecosystem approach to estimate community level effects of radiation

    International Nuclear Information System (INIS)

Doi, Masahiro; Tanaka, Nobuyuki; Fuma, Shoichi; Ishii, Nobuyoshi; Takeda, Hiroshi; Kawabata, Zenichiro

    2004-01-01

A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that builds in demographic stochasticity and, by dividing the aquatic environment into patches, environmental stochasticity. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) that relate to the probability of population extinction. (author)

  6. Identifying victims of workplace bullying by integrating traditional estimation approaches into a latent class cluster model.

    Science.gov (United States)

    Leon-Perez, Jose M; Notelaers, Guy; Arenas, Alicia; Munduate, Lourdes; Medina, Francisco J

    2014-05-01

    Research findings underline the negative effects of exposure to bullying behaviors and document the detrimental health effects of being a victim of workplace bullying. While no one disputes its negative consequences, debate continues about the magnitude of this phenomenon since very different prevalence rates of workplace bullying have been reported. Methodological aspects may explain these findings. Our contribution to this debate integrates behavioral and self-labeling estimation methods of workplace bullying into a measurement model that constitutes a bullying typology. Results in the present sample (n = 1,619) revealed that six different groups can be distinguished according to the nature and intensity of reported bullying behaviors. These clusters portray different paths for the workplace bullying process, where negative work-related and person-degrading behaviors are strongly intertwined. The analysis of the external validity showed that integrating previous estimation methods into a single measurement latent class model provides a reliable estimation method of workplace bullying, which may overcome previous flaws.

  7. Estimating myocardial perfusion from dynamic contrast-enhanced CMR with a model-independent deconvolution method

    Directory of Open Access Journals (Sweden)

    Kadrmas Dan J

    2008-11-01

Full Text Available Abstract Background: Model-independent analysis with B-spline regularization has been used to quantify myocardial blood flow (perfusion) in dynamic contrast-enhanced cardiovascular magnetic resonance (CMR) studies. However, the model-independent approach has not been extensively evaluated to determine how the contrast-to-noise ratio between blood and tissue enhancement affects estimates of myocardial perfusion, or the degree to which the regularization depends on the noise in the measured enhancement data. We investigated these questions with a model-independent analysis method that uses iterative minimization and a temporal smoothness regularizer. Perfusion estimates using this method were compared to results from dynamic 13N-ammonia PET. Results: An iterative model-independent analysis method was developed and tested to estimate regional and pixelwise myocardial perfusion in five normal subjects imaged with a saturation recovery turboFLASH sequence at 3 T CMR. Estimates of myocardial perfusion using model-independent analysis depend on the choice of the regularization weight parameter, which increases nonlinearly to handle large decreases in the contrast-to-noise ratio of the measured tissue enhancement data. Quantitative perfusion estimates in five subjects imaged with 3 T CMR were 1.1 ± 0.8 ml/min/g at rest and 3.1 ± 1.7 ml/min/g at adenosine stress. The perfusion estimates correlated with dynamic 13N-ammonia PET (y = 0.90x + 0.24, r = 0.85) and were similar to results from other validated CMR studies. Conclusion: This work shows that a model-independent analysis method that uses iterative minimization and temporal regularization can be used to quantify myocardial perfusion with dynamic contrast-enhanced perfusion CMR. Results from this method are robust to choices of the regularization weight parameter over relatively large ranges in the contrast-to-noise ratio of the tissue enhancement data.
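
    The core computation, regularized deconvolution of the tissue curve by the arterial input function (AIF), fits in a short sketch. Below, a temporal second-difference (smoothness) penalty stands in for the paper's B-spline/temporal regularizer, on synthetic data:

    ```python
    import numpy as np

    # Tissue curve c(t) = AIF convolved with the impulse response h = F*R(t);
    # estimate h by Tikhonov-regularized deconvolution, then read flow off
    # the peak of h (since R(0) = 1).  All signals here are synthetic.
    dt, T = 1.0, 60
    t = np.arange(T) * dt
    aif = (t / 4.0) * np.exp(-t / 4.0)               # gamma-variate-like AIF
    F_true = 0.02                                    # "flow" (arbitrary units)
    h_true = F_true * np.exp(-t / 15.0)              # impulse response F*R(t)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(T)] for i in range(T)])
    c = A @ h_true + np.random.default_rng(3).normal(0, 2e-4, T)

    D = np.diff(np.eye(T), n=2, axis=0)              # second-difference operator
    lam = 1e-2                                       # regularization weight
    h_hat = np.linalg.lstsq(np.vstack([A, lam * D]),
                            np.concatenate([c, np.zeros(T - 2)]), rcond=None)[0]
    print("true flow:", F_true, " estimated:", h_hat.max())
    ```

    As the abstract notes, the quality of such estimates hinges on the regularization weight (lam here) relative to the noise level in the measured curves.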

  8. Model Based Analysis of the Variance Estimators for the Combined ...

    African Journals Online (AJOL)

    In this paper we study the variance estimators for the combined ratio estimator under an appropriate asymptotic framework. An alternative bias-robust variance estimator, different from that suggested by Valliant (1987), is derived. Several variance estimators are compared in an empirical study using a real population.

  9. Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations

    Directory of Open Access Journals (Sweden)

    Lenny Suardi

    2014-08-01

Full Text Available This paper applies maximum likelihood (ML) approaches to implementing structural models of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and the Longstaff & Schwartz (LS) models, are used in determining prices, yields, yield spreads and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as an option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond price data and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are applied to a sample of 24 bond prices of Indonesian corporations during the period 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond price data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The results show that both models, on average, underestimate the bond prices and overestimate the yields and yield spreads.
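
    A simplified sketch of the estimation loop (an iterative variant in the spirit of Duan's ML approach, not the paper's full procedure): invert the Merton equity-pricing equation for the unobserved firm value, re-estimate the firm-value volatility from the implied path, and repeat to convergence. All market inputs are invented.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def merton_equity(V, sigma, D, r, T):
        """Equity as a European call on firm value V with debt face value D."""
        d1 = (np.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)

    def estimate_firm_value(E, D, r, T, tol=1e-6):
        sigma = np.std(np.diff(np.log(E))) * np.sqrt(250)  # start from equity vol
        for _ in range(100):
            # Back out firm value from each observed equity price...
            V = np.array([brentq(lambda v: merton_equity(v, sigma, D, r, T) - e,
                                 e, e + 50 * D) for e in E])
            # ...then update the firm-value volatility from the implied path.
            new_sigma = np.std(np.diff(np.log(V))) * np.sqrt(250)
            if abs(new_sigma - sigma) < tol:
                break
            sigma = new_sigma
        return V, sigma

    E = np.exp(np.cumsum(np.random.default_rng(5).normal(0, 0.02, 250))) * 40
    V, sigma_V = estimate_firm_value(E, D=60.0, r=0.05, T=1.0)
    print("firm-value vol:", sigma_V, " latest firm value:", V[-1])
    ```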

  10. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable to evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure-rate component data; 2) failure rate inference and reliability prediction from the time type of failure data; 3) analyses of the demand type of failure data; 4) a common mode failure model applicable to the time type of failure data; 5) estimation of common mode failures from 'near-miss' demand type failure data.
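
    For the sparse "time type" data the report mentions, a conjugate Bayesian estimate is a standard workhorse; a tiny sketch with illustrative prior and data (not the report's actual scheme):

    ```python
    from scipy.stats import gamma

    # With a Gamma(a, b) prior on the failure rate and k failures observed in
    # T component-hours (Poisson likelihood), the posterior is Gamma(a+k, b+T).
    a, b = 0.5, 100.0        # weakly informative prior (rate per hour), assumed
    k, T = 1, 8.76e4         # one failure in ten component-years of exposure

    post = gamma(a + k, scale=1.0 / (b + T))
    print("posterior mean rate  :", post.mean())
    print("90% credible interval:", post.ppf([0.05, 0.95]))
    ```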

11. A Deep Neural Network Model for Rainfall Estimation Using Polarimetric WSR-88DP Radar Observations

    Science.gov (United States)

    Tan, H.; Chandra, C. V.; Chen, H.

    2016-12-01

Rainfall estimation based on radar measurements has been an important topic for decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall relation (i.e., the Z-R relation). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric approach, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation has been limited in practice by model complexity and structure, data quality, and differing rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning. Compared to traditional neural networks, deep learning methodologies have a larger number of hidden layers and a more complex structure for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources, such as satellite-based rainfall products and/or topographic data, to represent the rain characteristics at a given location. In particular, the WSR-88DP radar and rain gauge data collected in the Dallas-Fort Worth Metroplex and Florida are used extensively to train the model and for demonstration purposes. A quantitative evaluation of the deep neural network-based rainfall products will also be presented, which is based on an independent rain gauge

  12. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems; some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the experimental system and its corresponding receiver system. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
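
    The flavor of the scheme, an observer coupled to the measured state plus an adaptive law for the unknown parameter, can be shown on a scalar toy system (not the Letter's 4D oscillator or Ueda example; gains and dynamics invented):

    ```python
    import numpy as np

    # Track the unknown theta in  x' = -theta*x + sin(t)  with an observer
    # x_hat' = -theta_hat*x + u + k*e  and adaptive law  theta_hat' = -gamma*e*x,
    # where e = x - x_hat.  A Lyapunov argument gives V' = -k*e**2 <= 0, and the
    # sinusoidal input provides the persistent excitation needed for convergence.
    dt, steps = 1e-3, 200_000
    k, gamma = 5.0, 20.0                    # coupling and adaptation gains
    theta = 2.0                             # unknown "true" parameter
    x, x_hat, theta_hat = 1.0, 0.0, 0.0

    for i in range(steps):
        u = np.sin(i * dt)                  # persistently exciting drive
        e = x - x_hat
        x += dt * (-theta * x + u)                    # experimental system
        x_hat += dt * (-theta_hat * x + u + k * e)    # receiver/observer
        theta_hat += dt * (-gamma * e * x)            # adaptive law

    print("theta_hat =", round(theta_hat, 3))  # should approach 2.0
    ```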

  13. Key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling

    Directory of Open Access Journals (Sweden)

    Nelson Peter

    2006-11-01

Full Text Available Abstract Aim: To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand 1918). Methods: Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of the incidence distribution with a best-fit incidence-mortality lag distribution. Results: Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, and the corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days, respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion: These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged, it could potentially be controlled through the prompt use of major public health measures.
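
    A compact SEIR sketch with mid-range values from the abstract (R0 = 2.0, latent period 1 day, infectious period 0.25 day; camp size assumed) shows how such an incidence curve is generated; for this model without demography, R0 = beta/gamma:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 3000.0                                # camp population (assumed)
    R0, latent, infectious = 2.0, 1.0, 0.25   # days; mid-range abstract values
    sigma, gamma = 1.0 / latent, 1.0 / infectious
    beta = R0 * gamma                         # per-capita transmission rate

    def seir(t, y):
        S, E, I, R = y
        return [-beta * S * I / N,
                beta * S * I / N - sigma * E,
                sigma * E - gamma * I,
                gamma * I]

    sol = solve_ivp(seir, (0, 60), [N - 1, 0, 1, 0],
                    dense_output=True, max_step=0.1)
    t = np.linspace(0, 60, 601)
    S, I = sol.sol(t)[0], sol.sol(t)[2]
    print("peak prevalence:", I.max(), " final size:", N - S[-1])
    ```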

  14. The model for estimation production cost of embroidery handicraft

    Science.gov (United States)

    Nofierni; Sriwana, IK; Septriani, Y.

    2017-12-01

The embroidery industry is a type of micro-industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery products such as scarves and clothes are made that express the cultural value of a particular region. The owner of an enterprise must calculate the cost of production before deciding how many orders to accept from customers. A calculation approach to production cost analysis is therefore needed to assess the feasibility of each incoming order. This study proposes the design of an expert system (ES) to improve production management in the embroidery industry. The model uses a fuzzy inference system to estimate production cost. The research is based on surveys and knowledge acquisition from stakeholders in the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The fuzzy inputs are the quality, the complexity of the design and the working hours required, and the model output is useful for managing production cost in embroidery production.
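
    The estimation step can be sketched as a zero-order Sugeno-style fuzzy system: triangular memberships on the inputs, min-AND rule firing, and a weighted average of crisp rule consequents. The memberships, rules and cost values below are invented, whereas the paper's rule base comes from expert knowledge acquisition.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership with peak at b."""
        return float(max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0))

    def estimate_cost(quality, complexity, hours):
        lo_q, hi_q = tri(quality, -1, 0, 6), tri(quality, 4, 10, 11)
        lo_c, hi_c = tri(complexity, -1, 0, 6), tri(complexity, 4, 10, 11)
        lo_h, hi_h = tri(hours, -1, 0, 50), tri(hours, 30, 100, 101)
        # (firing strength via min-AND, crisp cost consequent in IDR thousands)
        rules = [(min(lo_c, lo_h, lo_q), 100.0),
                 (min(hi_c, lo_h, hi_q), 250.0),
                 (min(lo_c, hi_h, lo_q), 200.0),
                 (min(hi_c, hi_h, hi_q), 400.0)]
        w = np.array([fs for fs, _ in rules])
        z = np.array([out for _, out in rules])
        return float((w * z).sum() / w.sum())   # weighted-average defuzzification

    print(estimate_cost(quality=5, complexity=5, hours=40))  # a mid-range order
    ```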

  15. Estimating Agricultural Losses using Flood Modeling for Rural Area

    Directory of Open Access Journals (Sweden)

    Muhadi Nur Atirah

    2017-01-01

Full Text Available Flooding is the most significant natural hazard in Malaysia in terms of population affected, frequency, flood extent, flood duration and socio-economic damage. Flooding causes loss of life, injuries and property damage, and leaves economic damage to the country, especially when it occurs in a rural area where the main income depends on agriculture. This study focused on flooding in oil palm plantations, rubber plantations, and fruit and vegetable areas. InfoWorks ICM was used to develop a flood model to study the impact of flooding and to mitigate floods using a retention pond. A Geographical Information System (GIS), together with the flood model, was then used for the analysis of flood damage assessment and the management of flood risk. The estimated total damage for three different flood events (10, 50 and 100 ARI) ran to millions of ringgit. To reduce the flood impact along the Selangor River, a retention pond was suggested, modeled and tested. By constructing the retention pond, flood extents in agricultural areas were reduced significantly: by 60.49% for 10 ARI, 45.39% for 50 ARI and 46.54% for 100 ARI.

  16. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
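
    For Brownian motion with drift, the position increments are themselves sufficient, which is the point the authors make against MSD fitting; a minimal illustration on a simulated cell track (all parameter values invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    dt, n = 0.5, 400                          # frame interval (min), steps
    v_true = np.array([0.3, -0.1])            # drift (um/min), assumed
    D_true = 0.8                              # diffusion coefficient (um^2/min)

    steps = v_true * dt + rng.normal(0.0, np.sqrt(2 * D_true * dt), (n, 2))
    pos = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])   # one cell path

    # Increment-based estimators (variance taken after removing the drift):
    dx = np.diff(pos, axis=0)
    v_hat = dx.mean(axis=0) / dt
    D_hat = dx.var(axis=0, ddof=1).mean() / (2 * dt)
    print("drift estimate     :", v_hat)      # ~ [0.3, -0.1]
    print("diffusion estimate :", D_hat)      # ~ 0.8
    ```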

  17. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
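
    A first-order thermal network makes the parameter-estimation step concrete: fit the thermal resistance R and capacitance C of T' = (T_out - T)/(RC) + Q/C to data (simulated below as a stand-in for the TRNSYS output; all values invented).

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    dt, n = 60.0, 1440                       # 1-min samples over one day
    t = np.arange(n) * dt
    T_out = 5.0 + 3.0 * np.sin(2 * np.pi * t / 86400)   # outdoor temp (degC)
    Q = np.where((t // 3600) % 6 < 3, 2000.0, 0.0)      # floor-heating power (W)

    def simulate(R, C):
        """Forward-Euler simulation of a one-node house model."""
        T = np.empty(n); T[0] = 18.0
        for i in range(n - 1):
            T[i+1] = T[i] + dt * ((T_out[i] - T[i]) / (R * C) + Q[i] / C)
        return T

    T_meas = simulate(R=0.005, C=2e7) + np.random.default_rng(6).normal(0, 0.05, n)

    res = least_squares(lambda p: simulate(*p) - T_meas, x0=[0.01, 1e7],
                        bounds=([1e-4, 1e5], [1.0, 1e9]))
    print("estimated R, C:", res.x)          # ~ [0.005, 2e7]
    ```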

  18. Capacitance Online Estimation Based on Adaptive Model Observer

    Directory of Open Access Journals (Sweden)

    Cen Zhaohui

    2016-01-01

Full Text Available As basic components of electrical and electronic devices, capacitors are ubiquitous in electrical circuits. Conventional capacitors such as electrolytic capacitors are prone to degradation, aging and fatigue due to long-time operation and external stresses, both mechanical and electrical. In this paper, a novel online capacitance measurement/estimation approach is proposed. Firstly, an Adaptive Model Observer (AMO) is designed based on the capacitor's circuit equations. Secondly, the AMO's stability and convergence are analysed and discussed. Finally, capacitors with different capacitances and different initial voltages in a buck converter topology are tested and validated. Simulation results demonstrate the effectiveness and superiority of the proposed approach.
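
    The underlying identifiability is just i = C dv/dt; a toy batch least-squares version (a much simpler stand-in for the AMO, with invented component values) recovers C from sampled voltage and current:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n, C_true, R = 1e-5, 5000, 470e-6, 10.0    # RC discharge (assumed values)
    t = np.arange(n) * dt
    v = 12.0 * np.exp(-t / (R * C_true))           # capacitor voltage
    i = v / R                                      # discharge current
    v_meas = v + rng.normal(0, 5e-3, n)            # noisy voltage samples

    # From i = C dv/dt (discharging):  dv = (1/C) * (-i * dt)
    dv = np.diff(v_meas)
    x = -i[:-1] * dt
    C_hat = (x @ x) / (x @ dv)                     # least-squares slope inverted
    print("C true:", C_true, " C estimated:", C_hat)
    ```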

  19. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life in batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  20. Model Specifications for Estimating Labor Market Returns to Associate Degrees: How Robust Are Fixed Effects Estimates? A CAPSEE Working Paper

    Science.gov (United States)

    Belfield, Clive; Bailey, Thomas

    2017-01-01

    Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…
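
    The fixed-effects (within) estimator the studies rely on is easy to demonstrate: demean each person's outcome and regressor over time, then run OLS on the deviations, which differences out time-invariant ability. All variable names and magnitudes below are invented.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(8)
    n, T = 500, 6
    person = np.repeat(np.arange(n), T)
    ability = np.repeat(rng.normal(0, 1, n), T)        # unobserved, time-invariant
    degree = (rng.random(n * T) < 0.3).astype(float) + 0.2 * (ability > 0)
    degree = np.clip(degree, 0, 1)                     # correlated with ability
    log_wage = 2.0 + 0.15 * degree + 0.5 * ability + rng.normal(0, 0.2, n * T)

    df = pd.DataFrame({"person": person, "degree": degree, "log_wage": log_wage})
    # Within transformation: subtract each person's time-mean.
    dm = df.groupby("person")[["degree", "log_wage"]].transform(lambda s: s - s.mean())
    beta_fe = (dm["degree"] @ dm["log_wage"]) / (dm["degree"] @ dm["degree"])
    beta_ols = np.polyfit(df["degree"], df["log_wage"], 1)[0]
    print("FE estimate:", beta_fe, " OLS estimate (biased upward):", beta_ols)
    ```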

  1. Estimating Dynamic Connectivity States in fMRI Using Regime-Switching Factor Models

    KAUST Repository

    Ting, Chee-Ming

    2017-12-06

We consider the challenges in estimating state-related changes in brain connectivity networks with a large number of nodes. Existing studies use sliding-window analysis or time-varying coefficient models, which are unable to capture both smooth and abrupt changes simultaneously and rely on ad-hoc approaches to high-dimensional estimation. To overcome these limitations, we propose a Markov-switching dynamic factor model which allows the dynamic connectivity states in functional magnetic resonance imaging (fMRI) data to be driven by lower-dimensional latent factors. We specify a regime-switching vector autoregressive (SVAR) factor process to quantify the time-varying directed connectivity. The model enables a reliable, data-adaptive estimation of change-points of connectivity regimes and the massive dependencies associated with each regime. We develop a three-step estimation procedure: 1) extracting the factors using principal component analysis, 2) identifying connectivity regimes in a low-dimensional subspace based on the factor-based SVAR model, 3) constructing high-dimensional state connectivity metrics based on the subspace estimates. Simulation results show that our estimator outperforms K-means clustering of time-windowed coefficients, providing a more accurate estimate of time-evolving connectivity. It achieves a 60% reduction in mean squared error when the network dimension is comparable to the sample size. When applied to resting-state fMRI data, our method successfully identifies modular organization in resting-state networks, consistent with other studies. It further reveals changes in brain states with variations across subjects and distinct large-scale directed connectivity patterns across states.

  2. In vitro activation of retinal cells: estimating location of stimulated cell by using a mathematical model

    Science.gov (United States)

    Ziv, Ofer R.; Rizzo, Joseph F., III; Jensen, Ralph J.

    2005-03-01

    Activation of neurons at different depths within the retina and at various eccentricities from the stimulating electrode will presumably influence the visual percepts created by a retinal prosthesis. With an electrical prosthesis, neurons will be activated in relation to the stimulating charge that impacts their cell membranes. The common model used to predict charge density is Coulomb's law, also known as the square law. We propose a modified model for predicting neuronal depth that takes into account: (1) finite dimensions related to the position and size of the stimulating and return electrodes and (2) two-dimensional displacements of neurons with respect to the electrodes, two factors that are not considered in the square law model. We tested our model by using in vitro physiological threshold data that we had obtained previously for eight OFF-center brisk-transient rabbit retinal ganglion cells (RGCs). For our most spatially dense threshold data (25 µm increments up to 100 µm from the cell body), our model estimated the depth of one RGC to be 76 ± 76 µm (median ± SD), versus 87 ± 62 µm for the square law model. This difference was not statistically significant. For the seven other RGCs, for which we had obtained threshold data up to 800 µm from the cell body, the estimate of the RGC depth (using data obtained along the X axis) was 96 ± 74 µm versus 20 ± 20 µm for the square law and our modified model, respectively. Although this difference was not statistically significant (Student t-test: p = 0.12), our model provided median values much closer to the estimated depth of these RGCs (>25 µm). This more realistic estimate of cell depth predicted by our model is not unexpected in this latter data set because of the more spatially distributed threshold data points that were evaluated. Our model has theoretical advantages over the traditional square law model under certain conditions, especially when considering neurons that are
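
    The modified model itself is not given in the abstract, but the square-law baseline it improves on can be sketched directly: under the square law, threshold current grows with the square of the electrode-to-cell distance, T(x) = k(x² + d²) for lateral offset x and depth d, so depth can be estimated by least squares. The threshold values below are hypothetical.

    ```python
    # Sketch: depth estimation from thresholds under the classic square law.
    import numpy as np
    from scipy.optimize import curve_fit

    def square_law(x_um, k, d_um):
        """Threshold vs lateral offset for a cell at depth d_um."""
        return k * (x_um**2 + d_um**2)

    # Hypothetical thresholds (uA) at 25-um lateral increments
    x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
    thresh = np.array([2.1, 2.4, 3.3, 4.8, 6.9])

    (k_hat, d_hat), _ = curve_fit(square_law, x, thresh, p0=[1e-4, 50.0])
    print(f"Estimated depth: {d_hat:.0f} um")
    ```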

  3. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys based on a rotating panel design have been carried out over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates and performed better than direct estimation, but the dynamic model was better able than the Rao-Yu model to capture heterogeneity across areas, although this heterogeneity was reduced over time.

  4. AN APPROACH TO THE PROBABILISTIC CORROSION RATE ESTIMATION MODEL FOR INNER BOTTOM PLATES OF BULK CARRIERS

    Directory of Open Access Journals (Sweden)

    Romeo Meštrović

    2017-01-01

    This paper gives an approach to a probabilistic corrosion rate estimation model for the inner bottom plates of bulk carriers. First, using thickness measurement data for the inner bottom plates of the 25 bulk carriers considered, the best-fitting linear model for corrosion wastage is obtained as a function of the ship's age, under the assumption that the life of the coating is 4 years. The obtained corrosion rate is equal to mm/year. Note that this linear model is a particular case of a power model proposed in earlier investigations. Given that the corrosion rate of ship hull structures is influenced by many factors, many of an uncertain nature, several authors in recent studies have investigated probabilistic models as more appropriate descriptions of the expected corrosion. Motivated by these investigations, and using 2926 thickness measurements of corrosion wastage of inner bottom plates from 38 special surveys of the considered ships, this paper examines the cumulative distribution function of the corrosion rate involved in the linear model above, treated here as a continuous random variable. The statistical, numerical, and graphical results show that the logistic or normal distribution is well suited for a probabilistic corrosion rate estimation model for the inner bottom plates of bulk carriers. It is believed that this finding will be confirmed with greater statistical reliability in future investigations including many more data collected on the considered corrosion.
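
    A minimal sketch of the two steps described above, assuming a 4-year coating life: fit a linear corrosion-wastage model through the coating-life origin, then fit normal and logistic distributions to the per-measurement corrosion rates. All data values are illustrative placeholders, not the surveyed measurements.

    ```python
    # Sketch: linear wastage fit with coating life, then distribution fits.
    import numpy as np
    from scipy import stats

    T_COATING = 4.0  # assumed coating life, years

    age = np.array([6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0])    # ship age (yr)
    wastage = np.array([0.2, 0.55, 0.75, 1.2, 1.4, 1.9, 2.1])   # loss (mm)

    # Linear model through the coating-life origin: wastage = r * (age - Tc)
    t_eff = age - T_COATING
    r_hat = (t_eff @ wastage) / (t_eff @ t_eff)  # least squares, no intercept
    print(f"Mean corrosion rate: {r_hat:.3f} mm/year")

    # Treat per-measurement rates as a random variable and compare fits
    rates = wastage / t_eff
    for dist in (stats.norm, stats.logistic):
        params = dist.fit(rates)
        ks = stats.kstest(rates, dist.name, args=params)
        print(dist.name, params, f"KS p-value: {ks.pvalue:.2f}")
    ```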

  5. Data assimilation within the Advanced Circulation (ADCIRC) modeling framework for the estimation of Manning's friction coefficient

    KAUST Repository

    Mayo, Talea

    2014-04-01

    Coastal ocean models play a major role in forecasting coastal inundation due to extreme events such as hurricanes and tsunamis. Additionally, they are used to model tides and currents under more moderate conditions. The models numerically solve the shallow water equations, which describe conservation of mass and momentum for processes with large horizontal length scales relative to the vertical length scales. The bottom stress terms that arise in the momentum equations can be defined through the Manning's n formulation, utilizing the Manning's n coefficient. The Manning's n coefficient is an empirically derived, spatially varying parameter that depends on many factors, such as the bottom surface roughness. It is critical to the accuracy of coastal ocean models; however, the coefficient is often unknown or highly uncertain. In this work we reformulate a statistical data assimilation method generally used in the estimation of model state variables to estimate this model parameter. We show that low-dimensional representations of Manning's n coefficients can be recovered by assimilating water elevation data. This is a promising approach to parameter estimation in coastal ocean modeling. © 2014 Elsevier Ltd.
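
    The paper's exact filter formulation is not given in the abstract; the sketch below shows the generic idea of augmenting the state vector with the unknown parameter and updating both from water-elevation observations with an ensemble Kalman analysis step. All dimensions and values are toy assumptions.

    ```python
    # Sketch: augmented-state ensemble Kalman update for parameter estimation.
    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_var):
        """ensemble: (n_ens, n_state) with Manning's n appended as last entry."""
        n_ens = ensemble.shape[0]
        Hx = np.array([obs_operator(m) for m in ensemble])    # (n_ens, n_obs)
        X = ensemble - ensemble.mean(axis=0)
        Y = Hx - Hx.mean(axis=0)
        Pxy = X.T @ Y / (n_ens - 1)
        Pyy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(Hx.shape[1])
        K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
        perturbed = obs + np.sqrt(obs_var) * np.random.randn(n_ens, len(obs))
        return ensemble + (perturbed - Hx) @ K.T

    # Toy usage: state = [elevation at 2 nodes, manning_n]
    ens = np.random.randn(50, 3) * [0.1, 0.1, 0.005] + [0.0, 0.0, 0.03]
    obs = np.array([0.12, 0.08])                 # observed elevations (m)
    H = lambda m: m[:2]                          # observe elevations only
    ens = enkf_update(ens, obs, H, obs_var=0.01**2)
    print(f"Posterior Manning's n estimate: {ens[:, 2].mean():.4f}")
    ```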

  6. A NEW ELECTRON-DENSITY MODEL FOR ESTIMATION OF PULSAR AND FRB DISTANCES

    Energy Technology Data Exchange (ETDEWEB)

    Yao, J. M.; Wang, N. [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150, Science 1-Street, Urumqi, Xinjiang 830011 (China); Manchester, R. N. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, P.O. Box 76, Epping NSW 1710 (Australia)

    2017-01-20

    We present a new model for the distribution of free electrons in the Galaxy, the Magellanic Clouds, and the intergalactic medium (IGM) that can be used to estimate distances to real or simulated pulsars and fast radio bursts (FRBs) based on their dispersion measure (DM). The Galactic model has an extended thick disk representing the so-called warm interstellar medium, a thin disk representing the Galactic molecular ring, spiral arms based on a recent fit to Galactic H ii regions, a Galactic Center disk, and seven local features including the Gum Nebula, Galactic Loop I, and the Local Bubble. An offset of the Sun from the Galactic plane and a warp of the outer Galactic disk are included in the model. Parameters of the Galactic model are determined by fitting to 189 pulsars with independently determined distances and DMs. Simple models are used for the Magellanic Clouds and the IGM. Galactic model distances are within the uncertainty range for 86 of the 189 independently determined distances and within 20% of the nearest limit for a further 38 pulsars. We estimate that 95% of predicted Galactic pulsar distances will have a relative error of less than a factor of 0.9. The predictions of the new model (YMW16) are compared to those of the TC93 and NE2001 models, showing that YMW16 performs significantly better on all measures. Timescales for pulse broadening due to interstellar scattering are estimated for (real or simulated) Galactic and Magellanic Cloud pulsars and FRBs.
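
    The way a DM maps to a distance under any electron-density model can be sketched by integrating n_e outward along the line of sight until the cumulative DM matches the observed value. The toy exponential thick-disk density below is illustrative only, not the YMW16 parameterization.

    ```python
    # Sketch: invert DM to distance under a toy electron-density model.
    import numpy as np

    def n_e(z_kpc, n0=0.02, h_kpc=1.6):
        """Toy thick-disk electron density (cm^-3) vs height above the plane."""
        return n0 * np.exp(-abs(z_kpc) / h_kpc)

    def dm_to_distance(dm_obs, gal_lat_deg, step_pc=10.0):
        """Step outward along the line of sight until the cumulative DM is reached."""
        sin_b = np.sin(np.radians(gal_lat_deg))
        dist_pc, dm = 0.0, 0.0
        while dm < dm_obs and dist_pc < 30_000:
            dist_pc += step_pc
            z_kpc = dist_pc * sin_b / 1000.0
            dm += n_e(z_kpc) * step_pc       # DM accumulates in pc cm^-3
        return dist_pc

    print(f"Toy distance for DM=30, b=45 deg: {dm_to_distance(30.0, 45.0):.0f} pc")
    ```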

  7. Modelling the broadband propagation of marine mammal echolocation clicks for click-based population density estimates.

    Science.gov (United States)

    von Benda-Beckmann, Alexander M; Thomas, Len; Tyack, Peter L; Ainslie, Michael A

    2018-02-01

    Passive acoustic monitoring with widely-dispersed hydrophones has been suggested as a cost-effective method to monitor population densities of echolocating marine mammals. This requires an estimate of the area around each receiver over which vocalizations are detected, the "effective detection area" (EDA). In the absence of auxiliary measurements enabling estimation of the EDA, it can be modelled instead. Common simplifying model assumptions include approximating the spectrum of clicks by flat energy spectra and neglecting the frequency-dependence of sound absorption within the click bandwidth (the narrowband assumption), rendering the problem amenable to solution using the sonar equation. Here, it is investigated how these approximations affect the estimated EDA and their potential for biasing the estimated density. The EDA was estimated using the passive sonar equation, and by applying detectors to simulated clicks injected into measurements of background noise. When model predictions from these two approaches were compared for different spectral energy distributions of echolocation clicks, but identical click source energy levels and detector settings, the EDA differed by up to a factor of 2 for Blainville's beaked whales. Both methods predicted that the relative density bias due to the narrowband assumption ranged from 5% to more than 100%, depending on the species, detector settings, and noise conditions.
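
    A minimal sketch of the sonar-equation approach to the EDA: find the range at which the received level falls to the detection threshold, then take EDA = πr². The source level, noise level, detection threshold, and absorption coefficient below are illustrative placeholders, not the paper's species-specific values.

    ```python
    # Sketch: detection range and EDA from the passive sonar equation.
    import numpy as np

    def received_level(r_m, source_level_db, alpha_db_per_km):
        """Spherical spreading plus frequency-dependent absorption."""
        return source_level_db - 20 * np.log10(r_m) - alpha_db_per_km * r_m / 1000.0

    def detection_range(source_level_db, noise_db, dt_db, alpha_db_per_km):
        r = np.linspace(1.0, 20_000.0, 200_000)
        snr = received_level(r, source_level_db, alpha_db_per_km) - noise_db
        detected = snr >= dt_db
        return r[detected].max() if detected.any() else 0.0

    r_max = detection_range(source_level_db=200.0, noise_db=50.0, dt_db=10.0,
                            alpha_db_per_km=10.0)
    print(f"Detection range ~{r_max:.0f} m, EDA ~{np.pi * r_max**2 / 1e6:.1f} km^2")
    ```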

  8. Median estimation of chemical constituents for sampling on two occasions under a log‐normal model

    Science.gov (United States)

    2015-01-01

    Sampling from a finite population on multiple occasions introduces dependencies between successive samples when overlap is designed. Such sampling designs lead to efficient statistical estimates while also allowing changes over time in the targeted outcomes to be estimated, which makes them very popular in real-world statistical practice. Sampling with partial replacement can also be very efficient in biological and environmental studies, where estimation of toxicant levels and their trends over time is the main interest. Here, sampling with partial replacement is designed on two occasions in order to estimate the median concentration of chemical constituents quantified by means of liquid chromatography coupled with tandem mass spectrometry. Such data represent relative peak areas resulting from the chromatographic analysis; they are therefore positive-valued and skewed, and are commonly fitted very well by the log-normal model. A log-normal model is assumed here for chemical constituents quantified in mainstream cigarette smoke in a real case study. Combining design-based and model-based approaches to statistical inference, we derive median estimates of chemical constituents from sampling with partial replacement on two occasions. We also discuss the limitations of extending the proposed approach to other skewed population models, which we investigate by means of a Monte Carlo simulation study. PMID:26013679
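
    Under a log-normal model the median is exp(μ), so a basic median estimate back-transforms the mean of the log data. The sketch below ignores the two-occasion partial-replacement design weighting that the paper develops, and uses simulated stand-in data.

    ```python
    # Sketch: median estimation under a log-normal model via the log scale.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    peak_areas = rng.lognormal(mean=1.2, sigma=0.5, size=40)  # stand-in data

    logs = np.log(peak_areas)
    mu_hat, se = logs.mean(), logs.std(ddof=1) / np.sqrt(len(logs))
    ci = np.exp(mu_hat + np.array([-1, 1]) * stats.norm.ppf(0.975) * se)
    print(f"Median estimate: {np.exp(mu_hat):.2f}, "
          f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
    ```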

  9. Parametric modeling and optimal experimental designs for estimating isobolograms for drug interactions in toxicology.

    Science.gov (United States)

    Holland-Letz, Tim; Gunkel, Nikolas; Amtmann, Eberhard; Kopp-Schneider, Annette

    2017-11-27

    In toxicology and related areas, interaction effects between two substances are commonly expressed through a combination index [Formula: see text] evaluated separately at different effect levels and mixture ratios. Often, these indices are combined into a graphical representation, the isobologram. Instead of estimating the combination indices at the experimental mixture ratios only, we propose a simple parametric model for estimating the underlying interaction function. We integrate this approach into a joint model in which both the parameters of the dose-response functions of the single substances and the interaction parameters can be estimated simultaneously. As an additional benefit, this concept makes it possible to determine optimal statistical designs for combination studies that optimize the estimation of the interaction function as a whole. From an optimal design perspective, finding the interaction parameters generally corresponds to a [Formula: see text]-optimality or [Formula: see text]-optimality design problem, respectively, while estimation of all underlying dose-response parameters corresponds to a [Formula: see text]-optimality design problem. We show how optimal designs can be obtained in either case, as well as how combination designs providing reasonable performance with regard to both criteria can be determined by putting a constraint on the efficiency with regard to one of the criteria and optimizing for the other. As all designs require prior information about model parameter values, which may be unreliable in practice, the effect of misspecifications is investigated as well.
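
    For orientation, the combination index underlying an isobologram is commonly the Loewe additivity index CI = d₁/D₁ + d₂/D₂, where D₁ and D₂ are the single-agent doses producing the same effect as the mixture (d₁, d₂). The sketch below computes it from assumed Hill dose-response parameters; the paper's parametric interaction function and optimal designs are not reproduced here.

    ```python
    # Sketch: Loewe combination index from Hill dose-response curves.
    import numpy as np

    def hill_inverse(effect, ec50, slope):
        """Dose producing a given fractional effect under a Hill curve."""
        return ec50 * (effect / (1.0 - effect)) ** (1.0 / slope)

    def combination_index(d1, d2, effect, ec50_1, slope_1, ec50_2, slope_2):
        D1 = hill_inverse(effect, ec50_1, slope_1)
        D2 = hill_inverse(effect, ec50_2, slope_2)
        return d1 / D1 + d2 / D2

    # Mixture (2, 5) produced a 50% effect; CI < 1 suggests synergy
    ci = combination_index(2.0, 5.0, 0.5, ec50_1=6.0, slope_1=1.2,
                           ec50_2=12.0, slope_2=0.8)
    print(f"Combination index at 50% effect: {ci:.2f}")
    ```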

  10. Hierarchical Bayes Small Area Estimation under a Unit Level Model with Applications in Agriculture

    Directory of Open Access Journals (Sweden)

    Nageena Nazir

    2016-09-01

    This paper studies the Bayesian aspects of small area estimation using a unit level model. We propose and evaluate a new prior distribution for the ratio of variance components in the unit level model, as an alternative to the uniform prior. To approximate the posterior moments of the small area means, the Laplace approximation method is applied. The proposed prior avoids the extreme skewness usually present in the posterior distribution of variance components, a property that leads to a more accurate Laplace approximation. We apply the proposed model to the analysis of horticultural data, and results from the model are compared with the frequentist approach and with the Bayesian model under the uniform prior in terms of average relative bias, average squared relative bias, and average absolute bias. The numerical results highlight the superiority of the proposed prior over the uniform prior. Thus, the Bayes estimators of small area means under the new prior have good frequentist properties, such as MSE and ARB, compared with traditional methods, viz., the direct, synthetic, and composite estimators.

  11. Demographics of reintroduced populations: estimation, modeling, and decision analysis

    Science.gov (United States)

    Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.

    2013-01-01

    Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.

  12. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    Science.gov (United States)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed to monitor seismic activity, mainly along the Mexican subduction zone and in Mexico City. Nevertheless, other major regions and cities that can be affected by strong earthquakes have not yet begun their seismic instrumentation programs, or these are still in development. Because of this situation, some relevant earthquakes (e.g., Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in cities such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth, and soil characterization; for response spectra, we used a vector of spectral accelerations. To train the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks with a multi-layer feed-forward architecture, used as a soft computing tool, provide good estimates of the target parameters and have good predictive capacity for strong ground motion duration and response spectra.
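
    A sketch of the kind of multi-layer feed-forward model described above, predicting significant duration from magnitude, epicentral distance, focal depth, and a soil-class code; the training data are random placeholders, not the Puebla/Oaxaca records, and the network size is an arbitrary choice.

    ```python
    # Sketch: feed-forward regressor for significant duration (toy data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = np.column_stack([
        rng.uniform(4.0, 8.0, 200),     # magnitude
        rng.uniform(20, 400, 200),      # epicentral distance (km)
        rng.uniform(5, 120, 200),       # focal depth (km)
        rng.integers(0, 3, 200),        # soil class code
    ])
    y = 2.0 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 2, 200)  # fake durations (s)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                       random_state=0))
    model.fit(X, y)
    print(f"Predicted duration: {model.predict([[7.1, 150.0, 40.0, 1]])[0]:.1f} s")
    ```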

  13. Preliminary study of subsurface temperature estimation by analyzing temperature dependent geo-electromagnetic conductivity models

    Science.gov (United States)

    Lee, S. K.; Lee, Y.; Lee, C.

    2016-12-01

    Estimating deep subsurface temperature is an important procedure for the exploration, development, and sustainable use of geothermal resources in a geothermal area. Many indirect geothermometry techniques have been suggested for this purpose, such as mineral geothermometers, hydrochemical geothermometers, isotopic geothermometers, electromagnetic (EM) geothermometers, and so forth. In this study, we have tested the feasibility of an EM geothermometer using integrated frameworks of geothermal and geo-electromagnetic models. For this purpose, we have developed a geothermal temperature model together with an EM model based on a common earth model, which satisfies all observed geoscientific data sets, including surface geology, structural geology, well log data, and geophysical data. We develop a series of plugin modules that integrate geo-electromagnetic modeling and inversion algorithms on a common geological modeling platform. The evolution of subsurface temperature over time is modeled by solving heat transfer equations with the finite element method (FEM). A temperature-dependent conductivity model is then obtained from temperature-conductivity relations, allowing geo-electromagnetic methods, such as magnetotellurics, to be used to analyze the temperature model from EM data.
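
    The temperature-conductivity link that couples the two models can take several forms; an Arrhenius-type relation σ(T) = σ₀ exp(−Eₐ/kT) is one common choice for thermally activated conduction. The constants below are illustrative assumptions, not calibrated values from this study.

    ```python
    # Sketch: map a modeled temperature profile to electrical conductivity.
    import numpy as np

    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

    def conductivity(temp_c, sigma0_s_per_m=1e4, ea_ev=0.5):
        """Arrhenius-type temperature (deg C) to conductivity (S/m) mapping."""
        temp_k = np.asarray(temp_c, dtype=float) + 273.15
        return sigma0_s_per_m * np.exp(-ea_ev / (K_BOLTZMANN_EV * temp_k))

    # Convert a modeled 1-D temperature profile into a conductivity model
    depth_temps_c = np.array([25.0, 80.0, 150.0, 250.0])
    print(conductivity(depth_temps_c))
    ```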

  14. A simple iterative method for estimating evapotranspiration with integrated surface/subsurface flow models

    Science.gov (United States)

    Hwang, H.-T.; Park, Y.-J.; Frey, S. K.; Berg, S. J.; Sudicky, E. A.

    2015-12-01

    This work presents an iterative, water-balance-based approach to estimate actual evapotranspiration (ET) with integrated surface/subsurface flow models. Traditionally, groundwater level fluctuation methods have been widely accepted and used for estimating ET and net groundwater recharge; however, in watersheds where interactions between surface and subsurface flow regimes are highly dynamic, the traditional method may be overly simplistic. Here, an innovative methodology is derived and demonstrated for using the water balance equation in conjunction with a fully-integrated surface and subsurface hydrologic model (HydroGeoSphere) in order to estimate ET at watershed and sub-watershed scales. The method invokes a simple and robust iterative numerical solution. For the proof-of-concept demonstrations, the method is used to estimate ET for a simple synthetic watershed and then for a real, highly-characterized 7000 km² watershed in Southern Ontario, Canada (the Grand River Watershed). The results for the Grand River Watershed show that with three to five iterations, the solution converges to a result with less than 1% relative error in stream flow calibration at 16 stream gauging stations. The spatially-averaged ET estimated using the iterative method shows a high level of agreement (R² = 0.99) with that from a benchmark case simulated with an ET model embedded directly in HydroGeoSphere. The new approach presented here is applicable to any watershed that is suited to integrated surface water/groundwater flow modelling and where spatially-averaged ET estimates are useful for calibrating modelled stream discharge.
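
    A toy sketch of the iterative water-balance idea: given precipitation P and a flow model that returns discharge Q for a trial ET, update ET from ET = P − Q − ΔS until it stabilizes. The run_flow_model surrogate below is a hypothetical stand-in for an integrated model such as HydroGeoSphere, and all values are placeholders.

    ```python
    # Sketch: fixed-point iteration on the water balance to estimate ET.
    def run_flow_model(et_mm):
        """Toy surrogate: higher ET leaves less water for discharge."""
        precip_mm = 900.0
        return max(0.3 * (precip_mm - et_mm), 0.0)  # crude runoff response

    def estimate_et(precip_mm=900.0, storage_change_mm=20.0, tol_mm=0.5):
        et = 0.5 * precip_mm  # initial guess
        for iteration in range(20):
            q = run_flow_model(et)                       # "run" the flow model
            et_new = precip_mm - q - storage_change_mm   # water-balance update
            if abs(et_new - et) < tol_mm:
                break
            et = et_new
        return et_new, iteration + 1

    et, n_iter = estimate_et()
    print(f"ET ~{et:.0f} mm/yr after {n_iter} iterations")
    ```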

  15. Spectral estimation of soil properties in siberian tundra soils and relations with plant species composition

    DEFF Research Database (Denmark)

    Bartholomeus, Harm; Schaepman-Strub, Gabriela; Blok, Daan

    2012-01-01

    … will significantly impact the global carbon cycle. We explore the potential of soil spectroscopy to estimate soil carbon properties and investigate the relation between soil properties and vegetation composition. Soil samples are collected in Siberia, and vegetation descriptions are made at each sample point. First, laboratory-determined soil properties are related to the spectral reflectance of wet and dried samples using partial least squares regression (PLSR) and stepwise multiple linear regression (SMLR). SMLR, using selected wavelengths related with C and N, yields high calibration accuracies for C and N. PLSR …, but vegetation composition can be used for qualitative estimation of soil properties.
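
    A sketch of the first step described above, relating laboratory-determined soil chemistry to spectral reflectance with PLSR; the spectra and carbon values are random placeholders, not the Siberian samples, and the number of components is an arbitrary choice.

    ```python
    # Sketch: PLSR calibration of soil carbon against reflectance spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    reflectance = rng.random((60, 200))          # 60 samples x 200 wavelengths
    carbon = reflectance[:, 50] * 8 + rng.normal(0, 0.3, 60)  # fake C content

    pls = PLSRegression(n_components=5)
    r2 = cross_val_score(pls, reflectance, carbon, cv=5, scoring="r2")
    print(f"Cross-validated R^2: {r2.mean():.2f}")
    ```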

  16. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs with graphical user interfaces (GUIs), so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  17. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows unknown inputs and parameters to be tracked over time. An algorithm for maximum likelihood estimation of the model has previously been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application, the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
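
    A sketch of the deconvolution idea: model the unknown secretion rate as a random walk, append it to the state, and run a Kalman filter over the measurements. For brevity this uses a one-compartment surrogate in Python rather than the paper's two-compartment C-peptide model and Matlab implementation; all constants are assumptions.

    ```python
    # Sketch: Kalman-filter deconvolution of an unknown input rate.
    import numpy as np

    def kalman_deconvolve(y, dt=1.0, k_el=0.05, q_input=0.5, r_meas=0.04):
        # State x = [concentration, secretion_rate]; rate evolves as a random walk
        F = np.array([[1.0 - k_el * dt, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([1e-6, q_input * dt])
        x, P = np.zeros(2), np.eye(2)
        rates = []
        for yk in y:
            x, P = F @ x, F @ P @ F.T + Q                 # predict
            S = H @ P @ H.T + r_meas
            K = P @ H.T / S                               # gain (scalar obs)
            x = x + (K * (yk - H @ x)).ravel()            # update state
            P = (np.eye(2) - K @ H) @ P
            rates.append(x[1])
        return np.array(rates)

    y = np.concatenate([np.linspace(0, 3, 30), np.full(30, 3.0)])  # toy data
    print(kalman_deconvolve(y)[-5:])
    ```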

  18. Modeling a Longitudinal Relational Research Data Systems

    Science.gov (United States)

    Olsen, Michelle D. Hunt

    2010-01-01

    A study was conducted to propose a research-based model for a longitudinal data research system that addressed recommendations from a synthesis of literature related to: (1) needs reported by the U.S. Department of Education, (2) the twelve mandatory elements that define federally approved state longitudinal data systems (SLDS), (3) the…

  19. Models of Man in Industrial Relations Research.

    Science.gov (United States)

    Kaufman, Bruce E.; And Others

    1989-01-01

    Kaufman attempts to identify essential characteristics that distinguish behavioral from nonbehavioral research in industrial relations. He argues that they are distinguished by the psychological model of man that is contained in the theoretical framework used to deduce or test hypotheses. Comments from Lewin, Mincer, and Cummings with Kaufman's…

  20. Relating structure and dynamics in organisation models

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    2003-01-01

    To understand how an organisational structure relates to dynamics is an interesting fundamental challenge in the area of social modelling. Specifications of organisational structure usually have a diagrammatic form that abstracts from more detailed dynamics. Dynamic properties of agent systems, on