WorldWideScience

Sample records for estimation model final

  1. Initial and final estimates of the Bilinear seasonal time series model ...

    African Journals Online (AJOL)

    In estimating the parameters of this model, special attention was paid to obtaining good initial estimates, since it is proposed that with good initial values of the parameters the estimates obtained by the Newton-Raphson iterative technique usually not only converge but are also good estimates.
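
    The role of starting values can be illustrated with a generic Newton-Raphson sketch (this is not the paper's bilinear seasonal model, whose score and Hessian are not given here); the toy likelihood, data and tolerances below are assumptions for illustration.

```python
# Generic Newton-Raphson sketch (illustrative; not the paper's bilinear model).
# The iteration theta <- theta - H(theta)^{-1} s(theta) is seeded with
# moment-based starting values, the kind of "good initial estimate" the
# abstract argues for.
import numpy as np

def newton_raphson(score, hessian, theta0, tol=1e-8, max_iter=50):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(theta), score(theta))
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Toy problem: ML estimation of (mu, log sigma) for normal data.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=200)
n = x.size

def score(theta):
    mu, logs = theta
    s2 = np.exp(2 * logs)
    return np.array([np.sum(x - mu) / s2,
                     -n + np.sum((x - mu) ** 2) / s2])

def hessian(theta):
    mu, logs = theta
    s2 = np.exp(2 * logs)
    return np.array([[-n / s2, -2 * np.sum(x - mu) / s2],
                     [-2 * np.sum(x - mu) / s2, -2 * np.sum((x - mu) ** 2) / s2]])

theta0 = np.array([x.mean(), np.log(x.std())])   # moment-based starting values
print(newton_raphson(score, hessian, theta0))    # close to (2.0, log 1.5)
```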

  2. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
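
    The simplest of the five approaches, matching moments of the prior to those of the data, can be sketched as below: the sample mean and variance of the observed failure fractions k_i/n_i are matched to the mean and variance of a Beta(alpha, beta) prior. The simulated data and parameter values are illustrative assumptions, and the report's weighted and marginal variants are not reproduced.

```python
# Hedged sketch of the "prior matching moments" idea for a beta-binomial
# failure-on-demand model: match the sample mean and variance of the observed
# failure fractions k_i/n_i to the moments of a Beta(alpha, beta) prior.
import numpy as np

def beta_prior_matching_moments(k, n):
    """k: failures on demand, n: demands, one entry per component."""
    p_hat = np.asarray(k, float) / np.asarray(n, float)
    m, v = p_hat.mean(), p_hat.var(ddof=1)
    if v <= 0 or v >= m * (1 - m):
        raise ValueError("sample moments incompatible with a beta prior")
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common      # (alpha, beta)

# Simulated beta-binomial failure data, roughly typical of low failure probabilities
rng = np.random.default_rng(1)
demands = rng.integers(50, 500, size=10)      # demands per component
p = rng.beta(0.5, 50.0, size=10)              # true component failure probabilities
failures = rng.binomial(demands, p)           # observed failures on demand
print(beta_prior_matching_moments(failures, demands))
```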

  3. Evaluation and Development of Pavement Scores, Performance Models and Needs Estimates for the TXDOT Pavement Management Information System : Final Report

    Science.gov (United States)

    2012-10-01

    This project conducted a thorough review of the existing Pavement Management Information System (PMIS) database, : performance models, needs estimates, utility curves, and scores calculations, as well as a review of District practices : concerning th...

  4. Primate polonium metabolic models and their use in estimation of systemic radiation doses from bioassay data. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, N. [New York Univ. Medical Center, Tuxedo, NY (United States). Dept. of Environmental Medicine

    1989-03-15

    A polonium metabolic model was derived and incorporated into a Fortran algorithm which estimates the systemic radiation dose from {sup 210}Po when applied to occupational urine bioassay data. The significance of the estimated doses is examined by defining the degree of uncertainty attached to them through comprehensive statistical testing procedures. Many parameters necessary for dosimetry calculations (such as organ partition coefficients and excretion fractions) were evaluated from metabolic studies of {sup 210}Po in non-human primates. Two tamarins and six baboons were injected intravenously with {sup 210}Po citrate. Excreta and blood samples were collected. Five of the baboons were sacrificed at times ranging from 1 day to 3 months post exposure. Complete necropsies were performed and all excreta and the majority of all skeletal and tissue samples were analyzed radiochemically for their {sup 210}Po content. The {sup 210}Po excretion rate in the baboon was more rapid than in the tamarin. The biological half-time of {sup 210}Po excretion in the baboon was approximately 15 days while in the tamarin, the {sup 210}Po excretion rate was in close agreement with the 50 day biological half-time predicted by ICRP 30. Excretion fractions of {sup 210}Po in the non-human primates were found to be markedly different from data reported elsewhere in other species, including man. A thorough review of the Po urinalysis procedure showed that significant recovery losses resulted when metabolized {sup 210}Po was deposited out of raw urine. Polonium-210 was found throughout the soft tissues of the baboon but not with the partition coefficients for liver, kidneys, and spleen that are predicted by the ICRP 30 metabolic model. A fractional distribution of 0.29 for liver, 0.07 for kidneys, and 0.006 for spleen was determined. Retention times for {sup 210}Po in tissues are described by single exponential functions with biological half-times ranging from 15 to 50 days.

  5. Manual estimation of fallout casualties. Final report

    International Nuclear Information System (INIS)

    Gant, K.S.; Haaland, C.M.

    1978-08-01

    A method is described for enabling Emergency Operating Centers (EOCs) to estimate nuclear fallout casualties (fatalities and injuries) during and after nuclear attack without the aid of computers. This method is compatible with the current manual method for estimating initial weapons effects. The new technique requires that the EOCs have information on nuclear detonations and upper wind conditions and that they have maps, a protractor, map overlay material, grease pencils, worksheets, and pencils. In addition, they will need two tables of data and a fallout casualty (FC) template, all supplied in this report. Five steps are involved in the estimation of fallout casualties for an area: sketching fallout wind streamlines on a map overlay; plotting locations of nuclear detonations and their fallout streamlines; measuring crosswind and upwind distances to detonation points from the point of interest; reading radiation exposure tables and summing the contributions from different weapons to obtain the exposure at that point; and using the FC template with the protection factor profile for the area to estimate fatalities and injuries. The tables of radiation exposure are based on a modified Weapons Systems Evaluation Group-10 (WSEG-10) fallout model. The table of county protection factor profiles (PFPs) assumes a Community Shelter Plan (CSP) posture

  6. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  7. Support to LANL: Cost estimation. Final report

    International Nuclear Information System (INIS)

    1993-01-01

    This report summarizes the activities and progress by ICF Kaiser Engineers conducted on behalf of Los Alamos National Laboratories (LANL) for the US Department of Energy, Office of Waste Management (EM-33) in the area of improving methods for Cost Estimation. This work was conducted between October 1, 1992 and September 30, 1993. ICF Kaiser Engineers supported LANL in providing the Office of Waste Management with planning and document preparation services for a Cost and Schedule Estimating Guide (Guide). The intent of the Guide was to use Activity-Based Cost (ABC) estimation as a basic method in preparing cost estimates for DOE planning and budgeting documents, including Activity Data Sheets (ADSs), which form the basis for the Five Year Plan document. Prior to the initiation of the present contract with LANL, ICF Kaiser Engineers was tasked to initiate planning efforts directed toward a Guide. This work, accomplished from June to September, 1992, included visits to eight DOE field offices and consultation with DOE Headquarters staff to determine the need for a Guide, the desired contents of a Guide, and the types of ABC estimation methods and documentation requirements that would be compatible with current or potential practices and expertise in existence at DOE field offices and their contractors

  8. Estimation of standard model backgrounds to the search for electroweak production of supersymmetry in events with at least two tau leptons in the final state

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Sebastian

    2013-11-04

    This thesis presents a search for Supersymmetry in events with at least two hadronically decaying tau leptons and missing transverse energy. The analysis is focused on the electro-weak production of gaugino pairs. Electroweak production processes for SUSY particles are promising candidates for the discovery of Supersymmetry with R-parity conservation at hadron-hadron colliders. For the analysis a sample of proton-proton collisions at a center of mass energy of √(s)=8 TeV with an integrated luminosity of ∫ L dt=20.3 fb{sup -1} is used. The collisions have been recorded with the ATLAS detector at the LHC in the year 2012. In two different selections the Standard Model predictions are compared with the observations. The observation of 6 events in the first selection and 14 in the second does not deviate significantly from the Standard Model with an expectation of 11 events in the first selection and 17 in the second. These results are interpreted in a phenomenological Minimal Supersymmetric Standard Model and in simplified models. For a simplified model with a chargino-neutralino pair production scenario the parameter space for masses of the lightest neutralino up to 100 GeV and up to 350 GeV for the lightest chargino mass can be excluded. For a simplified model with chargino pair production processes the parameter space for the lightest neutralino mass up to 30-50 GeV in a range for the lightest chargino mass of 170-330 GeV can be excluded. This thesis is focused on the estimation techniques of Standard Model background processes. Different methods for the estimation of the background originating from Z-boson and top-quark decays are investigated.

  9. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  10. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    International Nuclear Information System (INIS)

    Yucemen, S.

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs

  11. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    Energy Technology Data Exchange (ETDEWEB)

    Yucemen, S [Middle East Technical Univ., Ankara (Turkey). Dept. of Statistics

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs.

  12. Integrated Environmental Assessment Modelling. Estimates of Emission and Measurement of Impacts. Final Report of the Spanish Subproject EU/Life Project

    International Nuclear Information System (INIS)

    Guardans, R.; Gimeno, B.S.; Bermejo, V.; Elvira, S.; Martin, F.; Palacios, M.; Rodriguez, E.; Donaire, I.

    2000-01-01

    This report describes the results of the Spanish participation in the project Coupling CORINAIR data to cost-effective emission reduction strategies based on critical thresholds (EU/LIFE97/ENV/FIN/336). The subproject has focused on three tasks: developing tools to improve knowledge of the spatial and temporal details of air pollutant emissions in Spain, exploiting existing experimental information on plant response to air pollutants in temperate ecosystems, and integrating these findings in a modelling framework that can assess more accurately the impact of air pollutants on temperate ecosystems. The results obtained during the execution of this project have significantly improved the models of the impact of alternative emission control strategies on ecosystems and crops in the Iberian Peninsula. (Author) 375 refs

  13. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment - Task 4: Modeling - Final Report

    International Nuclear Information System (INIS)

    Robert C. Starr

    2005-01-01

    seven plumes at 24 DOE facilities were screened, and 14 plumes were selected for detailed examination. In the plumes selected for further study, spatial changes in the concentration of a conservative co-contaminant were used to compensate for the effects of mixing and temporal changes in TCE release from the contaminant source. Decline in TCE concentration along a flow path in excess of the co-contaminant concentration decline was attributed to cometabolic degradation. This study indicated that TCE was degraded in 9 of the 14 plumes examined, with first-order degradation half-lives ranging from about 1 to 12 years. TCE degradation in about two-thirds of the plumes examined suggests that cometabolism of TCE in aerobic groundwater is a common occurrence, in contrast to the conventional wisdom that TCE is recalcitrant in aerobic groundwater. The degradation half-life values calculated in this study are short enough that natural attenuation may be a viable remedy in many aerobic plumes. Computer modeling of groundwater flow and contaminant transport and degradation is frequently used to predict the evolution of groundwater plumes, and for evaluating natural attenuation and other remedial alternatives. An important aspect of a computer model is the mathematical approach for describing degradation kinetics. A common approach is to assume that degradation occurs as a first-order process. First-order kinetics are easily incorporated into transport models and require only a single value (a degradation half-life) to describe reaction kinetics. The use of first-order kinetics is justified in many cases because more elaborate kinetic equations often closely approximate first-order kinetics under typical field conditions. A previous modeling study successfully simulated the INL TCE plume using first-order degradation kinetics. TCE cometabolism is the result of TCE reacting with microbial enzymes that were produced for other purposes, such as oxidizing a growth substrate to obtain
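
    Since the study summarizes degradation with first-order half-lives, a minimal sketch of the corresponding kinetics may help: the half-life fixes the rate constant k = ln 2 / t_1/2, and concentration declines exponentially with travel time. The concentration and half-life values below are illustrative, not figures from the report.

```python
# Minimal sketch of first-order degradation kinetics as used in such transport
# models: a degradation half-life fixes the rate constant, and concentration
# declines exponentially with travel time. Values are illustrative only.
import numpy as np

def first_order_decay(c0, half_life_years, t_years):
    k = np.log(2.0) / half_life_years                   # rate constant, 1/yr
    return c0 * np.exp(-k * np.asarray(t_years, float))

# e.g. TCE at 100 ug/L with an assumed 5-year half-life over 20 years of travel
print(first_order_decay(100.0, 5.0, [0, 5, 10, 20]))    # -> 100, 50, 25, 6.25 ug/L
```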

  14. Estimation of the statistical distribution of faulting in selected areas and the design of an exploration model to detect these faults. Final research report

    International Nuclear Information System (INIS)

    Brooke, J.P.

    1977-11-01

    Selected sites in the United States have been analyzed geomathematically as a part of the technical support program to develop site suitability criteria for High Level Nuclear Waste (HLW) repositories. Using published geological maps and other information, statistical evaluations of the fault patterns and other significant geological features have been completed for 16 selected localities. The observed frequency patterns were compared to theoretical patterns in order to obtain a predictive model for faults at each location. In general, the patterns approximate an exponential distribution function with the exception of Edinburgh, Scotland--the control area. The fault pattern of rocks at Edinburgh closely approximates a negative binomial frequency distribution. The range of fault occurrences encountered during the investigation varied from a low of 0.15 to a high of 10 faults per square mile. Faulting is only one factor in the overall geological evaluation of HLW sites. A general exploration program plan to aid in investigating HLW repository sites has been completed using standard mineral exploration techniques. For the preliminary examination of the suitability of potential sites, present economic conditions indicate the scanning and reconnaissance exploration stages will cost approximately $1,000,000. These would proceed in a logical sequence so that the site selected optimizes the geological factors. The reconnaissance stage of mineral exploration normally utilizes "saturation geophysics" to obtain complete geological information. This approach is recommended in the preliminary HLW site investigation process as the most economical and rewarding. Exploration games have been designed for potential sites in the eastern and the western U.S. The game matrix approach is recommended as a suitable technique for the allocation of resources in a search problem during this preliminary phase

  15. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
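
    A minimal sketch of the analogy idea mentioned here: predict effort for a new project as the mean effort of its k nearest historical analogues in a normalized feature space. The features, distance measure and numbers are assumptions for illustration only, not the NASA model's feature set, clustering or tuning.

```python
# Hedged sketch of analogy-based effort estimation via k nearest neighbours.
import numpy as np

def knn_effort_estimate(X_hist, effort_hist, x_new, k=3):
    X = np.asarray(X_hist, float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12        # normalize feature scales
    Xn = (X - mu) / sd
    xn = (np.asarray(x_new, float) - mu) / sd
    dist = np.linalg.norm(Xn - xn, axis=1)
    nearest = np.argsort(dist)[:k]                         # k closest analogues
    return float(np.mean(np.asarray(effort_hist, float)[nearest]))

# Hypothetical features: [KSLOC, team experience (1-5), requirements volatility (1-5)]
X_hist = [[120, 4, 2], [300, 3, 4], [80, 5, 1], [500, 2, 5], [150, 4, 3]]
effort = [400, 1500, 220, 3200, 600]                       # person-months (illustrative)
print(knn_effort_estimate(X_hist, effort, [200, 4, 3], k=3))
```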

  16. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  17. Repository simulation model: Final report

    International Nuclear Information System (INIS)

    1988-03-01

    This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification were performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs

  18. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks.

    Science.gov (United States)

    Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J

    2012-02-07

    Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applying the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agreed with those published from transmission network analysis, indicating that the human-to-human transmission potential of the pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and yield estimates of threshold values above unity. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
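
    A hedged sketch of the estimation idea follows: the likelihood is built from the standard total-progeny (final size) distribution of a branching process with negative-binomial offspring (mean R, dispersion k) and, for a supercritical process, normalized by the extinction probability. The outbreak sizes below are hypothetical, k is assumed known as in the abstract, and the paper's exact formulation may differ.

```python
# Maximum-likelihood estimation of R from minor-outbreak final sizes, using the
# standard branching-process final-size distribution with negative-binomial
# offspring and normalizing by the extinction probability when R > 1.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq, minimize_scalar

def log_final_size(y, R, k):
    """log P(final size = y) for negative-binomial offspring (unconditional)."""
    y = np.asarray(y, float)
    return (gammaln(k * y + y - 1) - gammaln(k * y) - gammaln(y + 1)
            + (y - 1) * np.log(R / k) - (k * y + y - 1) * np.log1p(R / k))

def extinction_prob(R, k):
    """Smallest root of q = (1 + R*(1-q)/k)**(-k); equals 1 if R <= 1."""
    if R <= 1:
        return 1.0
    return brentq(lambda q: (1 + R * (1 - q) / k) ** (-k) - q, 1e-9, 1 - 1e-9)

def neg_loglik(R, sizes, k):
    return -np.sum(log_final_size(sizes, R, k) - np.log(extinction_prob(R, k)))

sizes = np.array([1, 1, 2, 1, 3, 1, 1, 5, 2, 1])   # hypothetical minor-outbreak sizes
k = 1.0                                            # geometric offspring, k assumed known
res = minimize_scalar(neg_loglik, bounds=(1.01, 10.0), args=(sizes, k), method="bounded")
print("MLE of R:", res.x)
```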

  19. Calculation of extreme wind atlases using mesoscale modeling. Final report

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Badger, Jake

    This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies as explai...

  20. Automotive advertising copy test. Final report. [Mileage estimates

    Energy Technology Data Exchange (ETDEWEB)

    1984-10-01

    The purpose of this research project was to explore the following issues: (1) mileage recall/recognition of miles per gallon/highway mileage estimates in print ads by advertisement readers; (2) determine consumer expectations and believability of advertised mileage guidelines; (3) measure recall/comprehension of mileage disclaimers; and (4) determine how consumers utilize published mileage estimates. The evidence from this study points to a public which is quite familiar with the EPA mileage estimates, in terms of using them as guidelines and in finding them to be helpful. Most adults also appear to be knowledgeable about factors which can affect car performance and, therefore, anticipate that, within certain tolerances, their actual mileage will differ from the EPA estimates. Although the consumer has been educated regarding fuel estimates, there is a very strong suggestion from this research that typical automobile print advertising does a less than effective job in generating awareness of specific EPA estimates as well as their attendant disclaimer. Copy strategy and execution have a critical impact on recall of the EPA mileage estimates. 18 tables.

  1. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  2. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.

  3. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  4. Construction and Demolition Debris 2014 US Final Disposition Estimates Using the CDDPath Method

    Data.gov (United States)

    U.S. Environmental Protection Agency — Estimates of the final amount and final disposition of materials generated in the Construction and Demolition waste stream measured in total mass of each material....

  5. Estimation of intermediate grade uranium resources. Final report

    International Nuclear Information System (INIS)

    Lambie, F.W.; Kendall, G.R.; Klahn, L.J.; Davis, J.C.; Harbaugh, J.W.

    1980-12-01

    The purpose of this project is to analyze the technique currently used by DOE to estimate intermediate grade uranium (0.01 to 0.05% U3O8) and, if possible, suggest alternatives to improve the accuracy and precision of the estimate. There are three principal conclusions resulting from this study. They relate to the quantity, distribution and sampling of intermediate grade uranium. While the results of this study must be validated further, they indicate that DOE may be underestimating intermediate level reserves by 20 to 30%. Plots of grade of U3O8 versus tonnage of ore and tonnage U3O8 indicate grade-tonnage relationships that are essentially log-linear, at least down to 0.01% U3O8. Though this is not an unexpected finding, it may provide a technique for reducing the uncertainty of intermediate grade endowment. The results of this study indicate that a much lower drill hole density is necessary for DOE to estimate uranium resources than for a mining company to calculate ore resources. Though errors in local estimates will occur, they will tend to cancel over the entire deposit
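
    A small sketch of the log-linear grade-tonnage idea: fit a straight line to log(tonnage) versus log(cutoff grade) and extrapolate toward the intermediate grades of interest. The grades and tonnages below are invented for illustration, not DOE figures, and the exact form of the relationship used in the report may differ.

```python
# Log-linear grade-tonnage extrapolation toward intermediate grades (sketch).
import numpy as np

grade = np.array([0.20, 0.15, 0.10, 0.07, 0.05])       # cutoff grade, % U3O8
tons = np.array([1.0e6, 1.9e6, 3.8e6, 6.5e6, 11.0e6])  # tons of ore above cutoff

slope, intercept = np.polyfit(np.log(grade), np.log(tons), 1)   # straight-line fit in logs
for g in (0.03, 0.02, 0.01):                             # intermediate grades of interest
    print(f"cutoff {g:.2f}% U3O8: ~{np.exp(intercept + slope * np.log(g)):.2e} tons of ore")
```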

  6. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  7. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  8. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It was expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract (Indonesian original, translated): PARAMETER ESTIMATION IN A MODEL FOR THE BREAD BAKING PROCESS. Bread product quality is highly dependent on the baking process used. A model developed with qualitative and quantitative methods was calibrated by experiments at a temperature of 200°C and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, the parameters of the heat and mass transfer model, then the parameters of the transformation model, and

  9. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  10. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  11. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  12. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  13. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  14. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  15. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis over the last decade, but despite its significance, the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  16. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult
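
    Two of the ingredients named in this abstract, an analytic availability for a simple redundant system and a jackknife estimate from sparse data, can be sketched as follows. The uptime figures and system layout are invented for illustration; the report's actual models are more elaborate.

```python
# Availability of a 1-out-of-n redundant system, plus a jackknife bias
# correction and standard error for a mean availability estimated from a
# sparse sample of observed uptime fractions (illustrative values).
import numpy as np

def parallel_availability(a, n):
    """System is up if at least one of n identical units (availability a) is up."""
    return 1.0 - (1.0 - a) ** n

uptimes = np.array([0.92, 0.97, 0.88, 0.95, 0.99])   # sparse observed uptime fractions
m = uptimes.size
theta_hat = uptimes.mean()
loo = np.array([np.delete(uptimes, i).mean() for i in range(m)])   # leave-one-out means
bias = (m - 1) * (loo.mean() - theta_hat)
se = np.sqrt((m - 1) / m * np.sum((loo - loo.mean()) ** 2))
print(parallel_availability(0.90, 2))                # two redundant units -> 0.99
print(theta_hat - bias, se)                          # jackknife-corrected estimate and SE
```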

  17. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
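
    The parameter cascading idea can be sketched on a simplified ODE analogue (du/dt = -theta u): the inner level fits basis coefficients under a penalty on the squared equation residual, and the outer level chooses theta to minimize the data misfit. The polynomial basis, penalty weight and simulated data are assumptions for illustration; the authors use spline bases and genuine PDEs.

```python
# Two nested optimization levels for a simplified ODE analogue (sketch).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 41)
y = np.exp(-0.7 * t) + 0.03 * rng.normal(size=t.size)      # noisy data, true theta = 0.7

deg = 8
Phi = np.vander(t, deg + 1, increasing=True)                # basis: 1, t, ..., t^deg
dPhi = np.hstack([np.zeros((t.size, 1)),
                  Phi[:, :-1] * np.arange(1, deg + 1)])     # time derivative of each basis
lam = 100.0                                                 # ODE-fidelity penalty weight

def inner_coefficients(theta):
    # minimize ||y - Phi c||^2 + lam * ||(dPhi + theta*Phi) c||^2  (linear in c)
    A = np.vstack([Phi, np.sqrt(lam) * (dPhi + theta * Phi)])
    b = np.concatenate([y, np.zeros(t.size)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

def outer_objective(theta):
    return np.sum((y - Phi @ inner_coefficients(theta)) ** 2)

res = minimize_scalar(outer_objective, bounds=(0.01, 5.0), method="bounded")
print("estimated theta:", res.x)                            # close to 0.7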

  18. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  19. A final size relation for epidemic models of vector-transmitted diseases

    OpenAIRE

    Fred Brauer

    2017-01-01

    We formulate and analyze an age of infection model for epidemics of diseases transmitted by a vector, including the possibility of direct transmission as well. We show how to determine a basic reproduction number. While there is no explicit final size relation as for diseases transmitted directly, we are able to obtain estimates for the final size of the epidemic.

  20. NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...

    African Journals Online (AJOL)

    NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM MEASURED AIR TEMPERATURE AND ... Nigerian Journal of Technology ... Solar radiation measurement is not sufficient in Nigeria for various reasons such as maintenance and ...

  1. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitcherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and the Richard’s nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
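
    A hedged sketch of fitting one of these models, Chapman-Richards, by Levenberg-Marquardt nonlinear least squares (closely related to the Marquardt iteration named in the abstract); the height-age data and starting values below are invented, not the Bowmont Norway spruce thinning data.

```python
# Fit H(t) = a*(1 - exp(-b*t))**c to top height-age data (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, b, c):
    return a * (1.0 - np.exp(-b * t)) ** c

age = np.array([10, 15, 20, 25, 30, 40, 50, 60], float)            # years
height = np.array([4.1, 7.5, 10.8, 13.6, 16.0, 19.5, 21.8, 23.2])  # metres (illustrative)

p0 = [30.0, 0.05, 1.5]                              # starting values for a, b, c
params, cov = curve_fit(chapman_richards, age, height, p0=p0)   # LM algorithm by default
print("a, b, c =", params)
print("approx. standard errors:", np.sqrt(np.diag(cov)))
```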

  2. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressway under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.

  3. Fleet replacement modeling : final report, July 2009.

    Science.gov (United States)

    2009-07-01

    This project focused on two interrelated areas in equipment replacement modeling for fleets. The first area was research-oriented and addressed a fundamental assumption in engineering economic replacement modeling that all assets providing a similar ...

  4. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
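
    The factorization idea can be sketched with a toy three-attribute table: approximating P(a, b, c) by P(a, b) P(c | b) needs only pairwise statistics and avoids the independence assumption between correlated attributes. The data and chain structure below are assumptions for illustration, not the paper's model construction or PostgreSQL integration.

```python
# Toy selectivity estimation from pairwise (two-dimensional) distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
a = rng.integers(0, 5, n)
b = (a + rng.integers(0, 3, n)) % 5                # correlated with a
c = (b + rng.integers(0, 2, n)) % 5                # correlated with b

def sel_true(qa, qb, qc):
    return np.mean((a == qa) & (b == qb) & (c == qc))

def sel_chain(qa, qb, qc):                         # P(a,b) * P(c|b)
    p_ab = np.mean((a == qa) & (b == qb))
    p_bc = np.mean((b == qb) & (c == qc))
    p_b = np.mean(b == qb)
    return p_ab * p_bc / p_b

def sel_independent(qa, qb, qc):                   # full independence assumption
    return np.mean(a == qa) * np.mean(b == qb) * np.mean(c == qc)

print(sel_true(1, 2, 2), sel_chain(1, 2, 2), sel_independent(1, 2, 2))
```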

  5. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, quasi-likelihood (QL), asymptotic quasi-likelihood (AQL), martingale difference, kernel estimator
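
    A minimal sketch in the spirit of the QL approach: Gaussian quasi-likelihood estimation of an ARCH(1) model, where only the conditional variance is specified rather than the full error distribution. The simulated series, starting values and optimizer settings are assumptions; the paper's QL/AQL estimating-equation machinery is more general.

```python
# Quasi-likelihood (Gaussian QML) estimation of ARCH(1): y_t = sigma_t * eps_t,
# sigma_t^2 = omega + alpha * y_{t-1}^2, with simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
omega_true, alpha_true, n = 0.2, 0.5, 2000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = omega_true + alpha_true * y[t - 1] ** 2
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

def neg_quasi_loglik(params, y):
    omega, alpha = params
    if omega <= 0 or alpha < 0:
        return np.inf
    sigma2 = omega + alpha * y[:-1] ** 2           # conditional variances for t = 1..n-1
    r = y[1:]
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

res = minimize(neg_quasi_loglik, x0=[0.1, 0.1], args=(y,), method="Nelder-Mead")
print("omega, alpha =", res.x)                     # close to (0.2, 0.5)
```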

  6. Final Project Report Load Modeling Transmission Research

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bravo, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yinger, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chassin, Dave [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Ning [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hiskens, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Venkataramanan, Giri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-03-31

    The research presented in this report primarily focuses on improving power system load models to better represent their impact on system behavior. The previous standard load model fails to capture the delayed voltage recovery events that are observed in the Southwest and elsewhere. These events are attributed to stalled air conditioner units after a fault. To gain a better understanding of their role in these events and to guide modeling efforts, typical air conditioner units were tested in laboratories. Using data obtained from these extensive tests, new load models were developed to match air conditioner behavior. An air conditioner model is incorporated in the new WECC composite load model. These models are used in dynamic studies of the West and can impact power transfer limits for California. Unit-level and system-level solutions are proposed as potential solutions to the delayed voltage recovery problem.

  7. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  8. Mathematical models for atmospheric pollutants. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; Barrager, S.M.

    1979-08-01

    The present and likely future roles of mathematical modeling in air quality decisions are described. The discussion emphasizes models and air pathway processes rather than the chemical and physical behavior of specific anthropogenic emissions. Summarized are the characteristics of various types of models used in the decision-making processes. Specific model subclasses are recommended for use in making air quality decisions that have site-specific, regional, national, or global impacts. The types of exposure and damage models that are currently used to predict the effects of air pollutants on humans, other animals, plants, ecosystems, property, and materials are described. The aesthetic effects of odor and visibility and the impact of pollutants on weather and climate are also addressed. Technical details of air pollution meteorology, chemical and physical properties of air pollutants, solution techniques, and air quality models are discussed in four appendices bound in separate volumes

  9. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
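
    Of the adaptive filters listed, the LMS update is the simplest to sketch: the weight vector moves along the instantaneous error gradient, w <- w + mu e x. The system-identification example below is illustrative and not taken from the book.

```python
# Minimal LMS adaptive-filter sketch applied to identifying an unknown FIR system.
import numpy as np

def lms(x, d, order=4, mu=0.05):
    """Adapt an FIR filter so that w @ [x[n], ..., x[n-order+1]] tracks d[n]."""
    w = np.zeros(order)
    y_hat = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # most recent sample first
        y_hat[n] = w @ u
        e = d[n] - y_hat[n]                       # instantaneous error
        w = w + mu * e * u                        # LMS weight update
    return w, y_hat

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])               # unknown system to identify
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = lms(x, d, order=4, mu=0.05)
print("estimated taps:", w)                       # close to h
```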

  10. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  11. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  12. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.

  13. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a face shape statistical model and represents pose parameters by trigonometric functions. The face shape statistical model is firstly built by analyzing the face shapes from different people under varying poses. The shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function is constructed between face image and face pose by linearly relating different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  14. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.

  15. Respiratory tract deposition models. Final report

    International Nuclear Information System (INIS)

    Yeh, H.C.

    1980-03-01

    Respiratory tract characteristics of four mammalian species (human, dog, rat and Syrian hamster) were studied, using replica lung casts. An in situ casting technique was developed for making the casts. Based on an idealized branch model, over 38,000 records of airway segment diameters, lengths, branching angles and gravity angles were obtained from measurements of two humans, two Beagle dogs, two rats and one Syrian hamster. From examination of the trimmed casts and morphometric data, it appeared that the structure of the human airway is closer to a dichotomous structure, whereas for dog, rat and hamster, it is monopodial. Flow velocity in the trachea and major bronchi in living Beagle dogs was measured using an implanted, subminiaturized, heated film anemometer. A physical model was developed to simulate the regional deposition characteristics proposed by the Task Group on Lung Dynamics of the ICRP. Various simulation modules for the nasopharyngeal (NP), tracheobronchial (TB) and pulmonary (P) compartments were designed and tested. Three types of monodisperse aerosols were developed for animal inhalation studies. Fifty Syrian hamsters and 50 rats were exposed to five different sizes of monodisperse fused aluminosilicate particles labeled with 169Yb. Anatomical lung models were developed for four species (human, Beagle dog, rat and Syrian hamster) that were based on detailed morphometric measurements of replica lung casts. Emphasis was placed on developing a lobar typical-path lung model and on developing a modeling technique which could be applied to various mammalian species. A set of particle deposition equations for deposition caused by inertial impaction, sedimentation, and diffusion were developed. Theoretical models of particle deposition were developed based on these equations and on the anatomical lung models.

  16. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  17. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction.
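
    As a rough illustration of the Kalman filtering approach such estimation algorithms build on, the sketch below runs one predict/update step for a simple two-state clock error model (phase and fractional frequency offset). The state-space form is a common textbook simplification and the noise values are invented, not taken from the report.

```python
import numpy as np

# Two-state clock model: x = [phase offset, fractional frequency offset].
dt = 1.0                                   # measurement interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # phase integrates frequency
H = np.array([[1.0, 0.0]])                 # only the phase offset is observed
Q = np.diag([1e-22, 1e-26])                # process noise (illustrative values)
R = np.array([[1e-18]])                    # measurement noise variance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the phase measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 1e-16
x, P = kalman_step(x, P, z=2e-9)           # one filtering step on a phase reading
print(x)
```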

  18. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without given stationary vector taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
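
    To make the reversible maximum likelihood estimation concrete, the sketch below implements one widely used fixed-point iteration that produces a detailed-balance-obeying transition matrix from a transition count matrix. It is a generic illustration (assuming every state has been visited), not necessarily the algorithm proposed in the paper or the exact PyEMMA implementation.

```python
import numpy as np

def reversible_mle(C, n_iter=10000, tol=1e-12):
    """Reversible MLE transition matrix from a count matrix C (rows: origin state)."""
    C = np.asarray(C, dtype=float)
    c_i = C.sum(axis=1)                    # total outgoing counts per state
    X = C + C.T                            # initial symmetric edge weights x_ij
    for _ in range(n_iter):
        x_i = X.sum(axis=1)
        X_new = (C + C.T) / (c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :])
        converged = np.max(np.abs(X_new - X)) < tol
        X = X_new
        if converged:
            break
    return X / X.sum(axis=1, keepdims=True)   # row-stochastic and reversible

C = np.array([[90, 10, 0], [8, 80, 12], [0, 15, 85]])
print(reversible_mle(C))
```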

  19. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
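
    A familiar special case of the models above is the Bradley-Terry model for pair comparisons. The sketch below obtains its maximum likelihood strength parameters with a generic optimizer on an invented win matrix; it illustrates the estimator studied, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

# wins[i, j] = number of times item i was chosen over item j (invented data).
wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]], dtype=float)
n = wins.shape[0]

def neg_log_lik(theta):
    # Bradley-Terry: P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)).
    ll = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and wins[i, j] > 0:
                ll += wins[i, j] * (theta[i] - np.logaddexp(theta[i], theta[j]))
    return -ll

# Fix theta_0 = 0 for identifiability and estimate the remaining strengths.
res = minimize(lambda t: neg_log_lik(np.concatenate(([0.0], t))), x0=np.zeros(n - 1))
print(np.concatenate(([0.0], res.x)))
```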

  20. Final model of multicriterion evaluation of animal welfare

    DEFF Research Database (Denmark)

    Bonde, Marianne; Botreau, R; Bracke, MBM

    One major objective of Welfare Quality® is to propose harmonized methods for the overall assessment of animal welfare on farm and at slaughter that are science based and meet societal concerns. Welfare is a multidimensional concept and its assessment requires measures of different aspects. Welfare Quality® proposes a formal evaluation model whereby the data on animals or their environment are transformed into value scores that reflect compliance with 12 subcriteria and 4 criteria of good welfare. Each animal unit is then allocated to one of four categories: excellent welfare, enhanced welfare, acceptable welfare and not classified. This evaluation model is tuned according to the views of experts from animal and social sciences, and stakeholders.

  1. A guide for estimating dynamic panel models: the macroeconomics models specifiness

    International Nuclear Information System (INIS)

    Coletta, Gaetano

    2005-10-01

    The aim of this paper is to review estimators for dynamic panel data models, a topic in which the interest has grown recently. As a consequence of this late interest, different estimation techniques have been proposed in the last few years and, given the last development of the subject, there is still a lack of a comprehensive guide for panel data applications, and for macroeconomics panel data models in particular. Finally, we also provide some indications about the Stata software commands to estimate dynamic panel data models with the techniques illustrated in the paper [it]

  2. Sparse estimation of polynomial dynamical models

    NARCIS (Netherlands)

    Toth, R.; Hjalmarsson, H.; Rojas, C.R.; Kinnaert, M.

    2012-01-01

    In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. This can be motivated either from appealing to a parsimony principle (Occam's razor) or from the view point of the utilization complexity in terms of

  3. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
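
    For reference, the one-dimensional Fokker-Planck (Kolmogorov forward) equation on which the FPK model is built can be written as below; the notation is generic and not necessarily the paper's.

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[\sigma^{2}(x)\,p(x,t)\bigr]
```

    Here $\mu(x)$ plays the role of the deterministic (selective) force acting on the trait and $\sigma^{2}(x)$ the random diffusion; with $\mu \equiv 0$ and constant $\sigma$ this reduces to Brownian motion.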

  4. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter $\lambda$, and the time-to-repair model for Y is an exponential density with parameter $\theta$. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is $A(t)=\lambda/(\lambda+\theta)+\theta/(\lambda+\theta)\,\exp[-(1/\lambda+1/\theta)\,t]$ for $t>0$. Also, the steady-state availability is $A(\infty)=\lambda/(\lambda+\theta)$. We use the observations from n failure-repair cycles of the power plant, say $X_1, X_2, \ldots, X_n, Y_1, Y_2, \ldots, Y_n$, to present the maximum likelihood estimators of A(t) and $A(\infty)$. The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
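
    Reading $\lambda$ and $\theta$ as the mean time to failure and mean time to repair (which is consistent with the formulas above), their maximum likelihood estimates from the observed cycles are simply the sample means, and the availability estimate follows by plug-in. The sketch below uses invented failure/repair records.

```python
import numpy as np

def availability_mle(X, Y, t):
    """Plug-in MLE of A(t) under exponential time-to-failure / time-to-repair models."""
    lam = np.mean(X)                      # MLE of mean time to failure
    theta = np.mean(Y)                    # MLE of mean time to repair
    A_inf = lam / (lam + theta)           # steady-state availability
    A_t = A_inf + (theta / (lam + theta)) * np.exp(-(1.0 / lam + 1.0 / theta) * t)
    return A_t, A_inf

X = np.array([900.0, 1200.0, 450.0, 1500.0])   # hypothetical times to failure (h)
Y = np.array([24.0, 36.0, 12.0, 30.0])         # hypothetical repair times (h)
print(availability_mle(X, Y, t=100.0))
```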

  5. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
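
    As a minimal illustration of sparse estimation in the p >> n regime mentioned above, the sketch below fits a LASSO on synthetic data with scikit-learn; the Graphical LASSO and TREX are not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

# p >> n toy problem: 40 samples, 500 predictors, only 3 truly active.
rng = np.random.default_rng(0)
n, p = 40, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
print("selected predictors:", np.flatnonzero(model.coef_))
```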

  6. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  7. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain and how to estimate these parameters. In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class, with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to model internet navigation patterns.

  8. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  9. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
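
    As a generic illustration of the single-model idea (one design matrix spanning all calibration segments, fitted in one least-squares step), the sketch below builds a blocked design matrix for segment-wise low-degree polynomials. It is a simplified stand-in for, not an implementation of, the extended Thomas-Liebetrau model, and it ignores run-to-run effects and continuity constraints.

```python
import numpy as np

def segmented_design(h, breaks, degree=2):
    """One design matrix for a calibration curve modelled as a separate
    low-degree polynomial in liquid height h on each segment."""
    seg = np.searchsorted(breaks, h, side="right")   # segment index of each observation
    cols = []
    for s in range(len(breaks) + 1):
        mask = (seg == s).astype(float)
        for d in range(degree + 1):
            cols.append(mask * h**d)
    return np.column_stack(cols)

# Hypothetical calibration run: probe heights h and measured volumes v.
rng = np.random.default_rng(0)
h = np.linspace(0.0, 3.0, 60)
v = 50.0 * h + 5.0 * np.sin(h) + rng.normal(0.0, 0.2, h.size)
A = segmented_design(h, breaks=[1.0, 2.0])
coef, *_ = np.linalg.lstsq(A, v, rcond=None)         # single fit across all segments
```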

  10. Parameter Estimation of Spacecraft Fuel Slosh Model

    Science.gov (United States)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble) which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of Nutation Time Constant obtained through simulation.

  11. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  12. Intermediate-term uranium supply curve estimation. Final report, February 1981

    International Nuclear Information System (INIS)

    Steyn, J.J.; Goudarzi, L.A.; Douglas, R.F.

    1981-02-01

    The study was undertaken to estimate US natural uranium supply capacities and associated economic costs over the intermediate term period, 1979 to 1990, and to develop the general supply outlook to 2000. Annual supply capacity schedules were estimated on an individual mill and mine family (production center) basis. Future production schedules were estimated by balancing estimated supply capacity with DOE future demand projections ranging from 150 GWe to 396 GWe of installed nuclear capacity in 2000; the impact of private sector inventory levels was accounted for. The long run economic cost of production was estimated by applying a specifically derived production cost model to individual mines and mills using available data, as for example, ore depths, grades, production type and capacity, mill recovery; economic assumptions, as for example, 16% return on equity, were also made. The study report presents details of the methodology and data used, and the results obtained; the production center assessment, the supply/cost curves developed, and the cost models used, are also presented. For the current trend outlook, the results indicate that existing and under-construction centers can meet the bulk of demand through 1990, and that new centers will be required to meet substantial shortfalls in the 1990s. The study indicated that the US average long run economic cost of $30 per pound in 1979, in constant dollars, will begin to increase after 1985 at between 4 and 5%

  13. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  14. A note on the conditional density estimate in single functional index model

    OpenAIRE

    2010-01-01

    Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...

  15. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimation of the money demand model of Pakistan, money supply (M1) shows heteroscedasticity of an unknown form. For estimation of such a model we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, the nonparametric kernel estimator and the nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better when compared with the nonparametric kernel estimator.

  16. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by the macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selection of a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in the atheoretical modelling.

  17. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

  18. Washington State Nursing Home Administrator Model Curriculum. Final Report.

    Science.gov (United States)

    Cowan, Florence Kelly

    The course outlines presented in this final report comprise a proposed Fort Steilacoom Community College curriculum to be used as a statewide model two-year associate degree curriculum for nursing home administrators. The eight courses described are introduction to nursing home administration, financial management of nursing homes, nursing home…

  19. Final Report Fermionic Symmetries and Self consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries--completely unknown before, and were able to a large extent to explain them. The importance of a self consistent shell model was emphasized.

  20. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  1. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Capital requirements for the transportation of energy materials: 1979 ARC estimates. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-28

    TERA's estimates of capital requirements to transport natural gas, crude oil, petroleum products, and coal in the US by 1990 are presented. It is a continuation of a 1978 study (EAPA 5:3946) to perform a similar analysis on 1979 scenarios. Scenarios B, C, and D from the EIA's Mid-range Energy Forecasting Systems, as used in the 1979 Annual Report to Congress (ARC), were provided as a basis for the analysis and represent three alternative futures. Summaries of transportation investment requirements through 1990 are given for Scenarios B, C, and D. Total investment requirements for the three modes (pipelines, railroads, waterways) and the energy commodities (coal, petroleum, petroleum products, natural gas) are estimated to range between $35.3 and $42.7 billion by 1990 depending on the scenario.

  3. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  4. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for significant reduction of the recurrent launch cost, which results in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But such space projects, which have long-range aspects, are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, since they want to be able to explain this investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that allows one to estimate the benefits to be expected from a new space venture. The main objective why humans should explore space is determined in this study to ``improve the quality of life''. This main objective is broken down into sub objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, while the passenger is mainly interested in amusement and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfactory levels, which are usable for the optimisation process of reusable launch vehicles.

  5. Estimation of Potential Population Level Effects of Contaminants on Wildlife; FINAL

    International Nuclear Information System (INIS)

    Loar, J.M.

    2001-01-01

    and development of new scaling models; (2) development of dose-response models for toxicity data presented in the literature; and (3) development of matrix-based population models that were coupled with dose-response models to provide realistic estimation of population-level effects for individual responses

  6. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creating the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model, where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third party tomographic images.

  7. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords : diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  8. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One of the sensor configurations used in UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  9. Administration and transaction cost estimates for a greenhouse gas offset system : final report

    International Nuclear Information System (INIS)

    2004-01-01

    Canada's Climate Change Plan provides large final emitters (LFEs) with the option to meet their emission targets through the purchase of domestic greenhouse gas (GHG) offset credits. This paper presents the results of a study which identified and estimated transaction costs associated with an offset system. The cost to both proponents and governments were identified. The study also suggested ways to reduce administration and transaction costs through design options. The study considered projects involving agriculture, forests, landfill gas capture, renewable energy and energy efficiency within a potential domestic offset system. It was determined that average transaction costs per tonne range from $19 per tonne to $0.05 depending on the design choice and project type. Total administration costs did not vary more than 5 per cent between different design choices. The total system costs, which are the combination of all transaction and administration costs, are the best indicator for the potential of a project. Eight case studies were examined and costs per tonne were presented. According to the results, the best opportunities to reduce both transaction and administration costs are to choose a broad approach to baselines, boundaries and quantification; and, to allow pooling in the forestry and agriculture sectors. Transaction costs can be lowered further by reducing the frequency of monitoring and verification and allowing pre-2008 crediting. refs., tabs., figs

  10. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  11. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite properties of estimators, namely: sum of biases, sum of absolute biases, sum of variances and sum of the mean squared error of the estimated parameter of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  12. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
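
    Among the estimators compared above, the Chao lower bound has a particularly simple closed form based on the counts of individuals seen once ($f_1$) and twice ($f_2$). The sketch below computes it for an invented capture-frequency list; the Zelterman, jackknife and bootstrap estimators are not shown.

```python
import numpy as np

def chao_lower_bound(frequencies):
    """Chao's lower-bound estimate of population size from a single list.
    `frequencies` holds, for each observed individual, how many times it appeared."""
    f = np.asarray(frequencies)
    n_obs = f.size                         # number of distinct individuals observed
    f1 = np.sum(f == 1)                    # individuals seen exactly once
    f2 = np.sum(f == 2)                    # individuals seen exactly twice
    if f2 == 0:
        return n_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when f2 = 0
    return n_obs + f1 ** 2 / (2.0 * f2)

print(chao_lower_bound([1, 1, 1, 2, 2, 3, 1, 4, 1]))   # 9 observed -> about 15 estimated
```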

  13. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
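
    As a small worked example of the kind of parameter estimation discussed above, the sketch below fits the rate constant of a first-order (n = 1) reaction, whose ODE has the closed-form solution $C(t) = C_0 e^{-kt}$, to invented concentration data.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order decay A -> products: dC/dt = -k*C, so C(t) = C0 * exp(-k*t).
def conc(t, C0, k):
    return C0 * np.exp(-k * t)

# Hypothetical concentration measurements.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
C = np.array([1.00, 0.61, 0.37, 0.14, 0.02])

(C0_hat, k_hat), _ = curve_fit(conc, t, C, p0=[1.0, 0.5])
print(C0_hat, k_hat)    # estimated initial concentration and rate constant
```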

  14. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  15. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  16. Reassessment of the technical bases for estimating source terms. Final report

    International Nuclear Information System (INIS)

    Silberberg, M.; Mitchell, J.A.; Meyer, R.O.; Ryder, C.P.

    1986-07-01

    This document describes a major advance in the technology for calculating source terms from postulated accidents at US light-water reactors. The improved technology consists of (1) an extensive data base from severe accident research programs initiated following the TMI accident, (2) a set of coupled and integrated computer codes (the Source Term Code Package), which models key aspects of fission product behavior under severe accident conditions, and (3) a number of detailed mechanistic codes that bridge the gap between the data base and the Source Term Code Package. The improved understanding of severe accident phenomena has also allowed an identification of significant sources of uncertainty, which should be considered in estimating source terms. These sources of uncertainty are also described in this document. The current technology provides a significant improvement in evaluating source terms over that available at the time of the Reactor Safety Study (WASH-1400) and, because of this significance, the Nuclear Regulatory Commission staff is recommending its use.

  17. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model containing the measurement errors and their processing is developed. To evaluate the difference between the general model of state estimation and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
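
    The core of most static state estimators is a weighted least-squares fit of the network state to redundant measurements, $\hat{x} = (H^{T}WH)^{-1}H^{T}Wz$ in the linearized case. The sketch below shows that step on a tiny invented example; it is a generic illustration, not the formulation used in the paper.

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Linear(ized) weighted least-squares state estimation for z = H x + e,
    with independent measurement errors of standard deviation sigma."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)     # weights = inverse error variances
    G = H.T @ W @ H                               # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, z - H @ x_hat                   # state estimate and residuals

# Tiny example with redundant measurements (all numbers invented).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
z = np.array([1.02, 0.98, 0.05])
print(wls_state_estimate(H, z, sigma=[0.01, 0.01, 0.02]))
```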

  18. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  19. Stochastic linear hybrid systems: Modeling, estimation, and application

    Science.gov (United States)

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  20. A Probabilistic Cost Estimation Model for Unexploded Ordnance Removal

    National Research Council Canada - National Science Library

    Poppe, Peter

    1999-01-01

    ...) contaminated sites that the services must decontaminate. Existing models for estimating the cost of UXO removal often require a high level of expertise and provide only a point estimate for the costs...

  1. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V

    2002-12-01

    The aim of this thesis is to build a statistical model of oil and gas fields' sizes distribution in a given sedimentary basin, for both the fields that exist in the subsoil and those which have already been discovered. The estimation of all the parameters of the model via estimation of the density of the observations by model selection of piecewise polynomials by penalized maximum likelihood techniques makes it possible to provide estimates of the total number of fields which are yet to be discovered, by class of size. We assume that the set of underground fields' sizes is an i.i.d. sample of unknown population with Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the sizes' interval being set (called a model), the analytical solutions of likelihood maximization enable one to estimate both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We shall add a monotonicity constraint over the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both Kullback and Hellinger risk of the associated estimator. We conclude by simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  2. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    /estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  3. Volatility estimation using a rational GARCH model

    Directory of Open Access Journals (Sweden)

    Tetsuya Takaishi

    2018-03-01

    Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model called the RGARCH-Exp model that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility and compare the RGARCH-type models with other asymmetric-type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that a volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
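
    As a point of reference for the model comparison described above, the sketch below fits a baseline GARCH(1,1) model by maximum likelihood with plain NumPy/SciPy. It is only a hedged illustration: the simulated `returns` series stands in for the Tokyo Stock Exchange data, and the RGARCH, RGARCH-Exp, EGARCH and GJR variants of the paper are not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)  # initialize the recursion with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Placeholder daily return series (the study would use the six Tokyo stocks).
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(1000)

res = minimize(garch11_nll, x0=[1e-5, 0.05, 0.90], args=(returns,),
               bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
omega, alpha, beta = res.x
print(f"omega={omega:.2e}, alpha={alpha:.3f}, beta={beta:.3f}")
```

    The fitted conditional variances could then be scored against a realized-volatility proxy with a loss function such as MSE or QLIKE, which is the kind of comparison the record describes.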

  4. Estimation of curve number by DAWAST model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tai Cheol; Park, Seung Ki; Moon, Jong Pil [Chungnam National University, Taejon (Korea, Republic of)

    1997-10-31

    Determining the effective rainfall is one of the most important factors in estimating the design flood hydrograph. The SCS curve number (CN) method has been frequently used to estimate the effective rainfall of the synthesized design flood hydrograph for hydraulic structures. However, caution is needed when applying the SCS-CN values originally developed in the U.S.A. to watersheds in Korea, because the characteristics of watersheds in Korea and cropping patterns, especially paddy land cultivation, are quite different from those in the U.S.A. A new CN method has therefore been introduced. The maximum storage capacity, herein defined as U{sub max}, can be calibrated from stream flow data and converted to a new CN-I for the driest soil moisture condition in the given watershed. Effective rainfall for the design flood hydrograph can then be estimated by the curve number developed for watersheds in Korea. (author). 14 refs., 5 tabs., 3 figs.
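
    For context, the standard SCS-CN runoff relation that underlies the curve number approach can be evaluated directly. The sketch below works in millimetres and uses a hypothetical curve number and storm depth; it is not tied to the Korean calibration described in the record.

```python
def effective_rainfall(P, CN, ia_ratio=0.2):
    """SCS-CN effective rainfall (runoff depth) Q in mm for a storm depth P in mm."""
    S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
    Ia = ia_ratio * S          # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# Hypothetical example: an 80 mm storm on a watershed with CN = 75.
print(round(effective_rainfall(80.0, 75.0), 1), "mm of effective rainfall")
```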

  5. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  6. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  7. Early mortality estimates for different nuclear accidents. Final Phase I report, October 1977-April 1979

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, F.F.

    1979-08-01

    Several studies have previously been made of the number of early deaths which might be expected in a population exposed to a cloud of radionuclides which could result from a nuclear accident. These analyses, however, have been limited to one accident scenario or to exposures involving limited numbers of radionuclides. The purpose of this Phase I study was to examine the existing data on the early health effects of inhaled radioactive materials and determine what, if any, new studies were needed to make reasonable estimates of early mortality after exposure of a population to a cloud of radionuclides of any type. The approach used in the Phase I project was to analyze the data bases available on the health effects of inhaled radioactive materials and document those which were adequate and useful. Using these data, a computer-based simulation model was developed depicting exposure to a radioactive aerosol, the dose to an individual exposed to the aerosol and the probability of dying from early effects.

  8. Early mortality estimates for different nuclear accidents. Final Phase I report, October 1977-April 1979

    International Nuclear Information System (INIS)

    Hahn, F.F.

    1979-08-01

    Several studies have previously been made of the number of early deaths which might be expected in a population exposed to a cloud of radionuclides which could result from a nuclear accident. These analyses, however, have been limited to one accident scenario or to exposures involving limited numbers of radionuclides. The purpose of this Phase I study was to examine the existing data on the early health effects of inhaled radioactive materials and determine what, if any, new studies were needed to make reasonable estimates of early mortality after exposure of a population to a cloud of radionuclides of any type. The approach used in the Phase I project was to analyze the data bases available on the health effects of inhaled radioactive materials and document those which were adequate and useful. Using these data, a computer-based simulation model was developed depicting exposure to a radioactive aerosol, the dose to an individual exposed to the aerosol and the probability of dying from early effects

  9. Extreme Quantile Estimation in Binary Response Models

    Science.gov (United States)

    1990-03-01

    in Cancer Research," Biometria, Vol. 66, pp. 307-316. Hsi, B.P. [1969], "The Multiple Sample Up-and-Down Method in Bioassay," Journal of the American...New Method of Estimation," Biometria, Vol. 53, pp. 439-454. Wetherill, G.B. [1976], Sequential Methods in Statistics, London: Chapman and Hall. Wu, C.FJ

  10. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  11. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  12. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  13. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  14. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to the properties of the estimators when the sample size is infinite although ...

  15. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
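
    The two-component normal mixture fit described here can be sketched with the EM-based maximum likelihood routine in scikit-learn. The bivariate `data` array below is a simulated placeholder for the stock-market and rubber price series, so the numbers carry no economic meaning.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: columns = (stock market price, rubber price), one row per period.
rng = np.random.default_rng(1)
data = np.column_stack([rng.normal(1500, 100, 200), rng.normal(250, 30, 200)])

# Two-component Gaussian mixture fitted by maximum likelihood (EM algorithm).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)

print("mixing weights:", gmm.weights_)
print("component means:\n", gmm.means_)
print("within-component correlations:",
      [c[0, 1] / np.sqrt(c[0, 0] * c[1, 1]) for c in gmm.covariances_])
```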

  16. Crash data modeling with a generalized estimator.

    Science.gov (United States)

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-05-11

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristics of over-dispersion. However, on occasions, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can be generally used to handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.
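
    Before choosing among Poisson, negative binomial, hyper-Poisson or GEC specifications, a quick check of the dispersion in a crash dataset can be informative. The sketch below fits a Poisson GLM with statsmodels and inspects the Pearson dispersion statistic; the covariate and counts are simulated placeholders, and the GEC and hyper-Poisson models themselves are not implemented here.

```python
import numpy as np
import statsmodels.api as sm

# Simulated crash counts with one covariate (e.g., log traffic volume).
rng = np.random.default_rng(2)
x = rng.uniform(6, 10, 300)
y = rng.poisson(np.exp(-3 + 0.4 * x))  # equi-dispersed by construction

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

dispersion = fit.pearson_chi2 / fit.df_resid
print("Pearson dispersion:", round(dispersion, 2))
# ~1 suggests equi-dispersion; >1 over-dispersion; <1 under-dispersion.
```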

  17. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data are recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with the PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while the estimated model offers higher flexibility than the model programmed in the PSIM software.

  18. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several of the available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of width of lanes and distance from noise meter to lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues in applying such models to urban roads are highlighted.

  19. Conditional density estimation using fuzzy GARCH models

    NARCIS (Netherlands)

    Almeida, R.J.; Bastürk, N.; Kaymak, U.; Costa Sousa, da J.M.; Kruse, R.; Berthold, M.R.; Moewes, C.; Gil, M.A.; Grzegorzewski, P.; Hryniewicz, O.

    2013-01-01

    Abstract. Time series data exhibits complex behavior including non-linearity and path-dependency. This paper proposes a flexible fuzzy GARCH model that can capture different properties of data, such as skewness, fat tails and multimodality in one single model. Furthermore, additional information and

  20. Microwave modeling of laser plasma interactions. Final report

    International Nuclear Information System (INIS)

    1983-08-01

    For large laser fusion targets and nanosecond pulse lengths, stimulated Brillouin scattering (SBS) and self-focusing are expected to be significant problems. The goal of the contractual effort was to examine certain aspects of these physical phenomena in a wavelength regime (lambda approx. 5 cm) more amenable to detailed diagnostics than that characteristic of laser fusion (lambda approx. 1 micron). The effort was to include the design, fabrication and operation of a suitable experimental apparatus. In addition, collaboration with Dr. Neville Luhmann and his associates at UCLA and with Dr. Curt Randall of LLNL, on analysis and modelling of the UCLA experiments, was continued. Design and fabrication of the TRW experiment are described under "Experiment Design" and "Experimental Apparatus". The design goals for the key elements of the experimental apparatus were met, but final integration and operation of the experiment were not accomplished. Some theoretical considerations on the interaction between Stimulated Brillouin Scattering and Self-Focusing are also presented

  1. Diagnostic modeling of the ARM experimental configuration. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Somerville, R.C.J.

    1998-04-01

    A major accomplishment of this work was to demonstrate the viability of using in-situ data in both mid-continent North America (SGP CART site) and Tropical Western Pacific (TOGA-COARE) locations to provide the horizontal advective flux convergences which force and constrain the Single-Column Model (SCM) which was the main theoretical tool of this work. The author has used TOGA-COARE as a prototype for the ARM TWP site. Results show that SCMs can produce realistic budgets over the ARM sites without relying on parameterization-dependent operational numerical weather prediction objective analyses. The single-column model is diagnostic rather than prognostic. It is numerically integrated in time as an initial value problem which is forced and constrained by observational data. The input is an observed initial state, plus observationally derived estimates of the time-dependent advection terms in the conservation equations, provided at all model layers. Its output is a complete heat and water budget, including temperature and moisture profiles, clouds and their radiative properties, diabatic heating terms, surface energy balance components, and hydrologic cycle elements, all specified as functions of time. These SCM results should be interpreted in light of the original motivation and purpose of ARM and its goal to improve the treatment of cloud-radiation interactions in climate models.

  2. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty.  When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference. In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
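
    The James-Stein result that this record builds on can be illustrated with a short simulation: for three or more independent normal means with unit variance, the shrinkage estimator below has smaller total squared error than the raw observations. The true means `theta` are, of course, a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 10
theta = rng.normal(0, 1, p)        # true means (hypothetical)
x = theta + rng.normal(0, 1, p)    # one observation per mean, unit variance

# Positive-part James-Stein estimator shrinking toward zero.
shrink = max(0.0, 1.0 - (p - 2) / np.sum(x ** 2))
theta_js = shrink * x

print("squared error of raw observations:", np.sum((x - theta) ** 2))
print("squared error of James-Stein estimate:", np.sum((theta_js - theta) ** 2))
```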

  3. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  4. Final report on cost estimate of forward superconducting air core toroid

    International Nuclear Information System (INIS)

    Fields, T.

    1992-12-01

    An independent cost-estimate for key components of the forward superconducting air core toroid (ACT) was obtained in May 1992 from an experienced manufacturer of large cryogenic vessels. This new cost estimate is summarized in this report. It implies that a suitably designed ACT may have a cost which is approximately equal to that of the presently designed SDC forward iron core toroid

  5. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
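
    The workflow in this record, i.e. generate concentrations from a transport model, corrupt them with Gaussian sensor noise, and recover the parameters with a least-squares batch processor, can be sketched as follows. For brevity the sketch uses a one-dimensional instantaneous-release advection-diffusion solution rather than the paper's two-dimensional shear-diffusion model, and all numerical values are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def concentration(x, D, u, M=1.0, t=2.0):
    """1-D instantaneous release: diffusivity D, advection speed u, mass M, time t."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - u * t) ** 2 / (4 * D * t))

# "Remote-sensed" data: model output plus Gaussian measurement noise.
rng = np.random.default_rng(4)
x_obs = np.linspace(-5.0, 15.0, 60)
true_D, true_u = 1.5, 3.0
y_obs = concentration(x_obs, true_D, true_u) + rng.normal(0.0, 0.005, x_obs.size)

# Batch least-squares estimate of (D, u) from the noisy observations.
popt, pcov = curve_fit(concentration, x_obs, y_obs, p0=[1.0, 1.0])
print("estimated D, u:", popt, "standard errors:", np.sqrt(np.diag(pcov)))
```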

  6. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...

  7. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  8. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
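
    The dimension-reduction step described above, representing a rainfall series by a small set of low-order wavelet approximation coefficients, can be sketched with PyWavelets. The 'db4' wavelet, the decomposition level and the synthetic series are illustrative assumptions, not the settings used for the Warwick catchment.

```python
import numpy as np
import pywt

# Hypothetical hourly rainfall series (mm) for one event window.
rng = np.random.default_rng(5)
rain = np.clip(rng.gamma(0.3, 2.0, 256) - 0.4, 0.0, None)

# Multilevel DWT; keep only the level-4 approximation coefficients as the
# low-dimensional representation to be estimated jointly with model parameters.
coeffs = pywt.wavedec(rain, "db4", level=4)
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_reduced = pywt.waverec(approx_only, "db4")[: rain.size]

print("series length:", rain.size, "-> retained coefficients:", coeffs[0].size)
```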

  9. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  10. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  11. mathematical models for estimating radio channels utilization

    African Journals Online (AJOL)

    2017-08-08

    Aug 8, 2017 ... Mathematical models for radio channels utilization assessment by real-time flows transfer in ... data transmission networks application having dynamic topology ..... Journal of Applied Mathematics and Statistics, 56(2): 85–90.

  12. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)

    The objective is to minimize the processing time and computer memory required to carry out the inversion ... to the mainland by two long bridges ... term. In this approach, the model converges when the squared sum of the differences ...

  13. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
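
    The core arithmetic of such an estimate, multiplying projected construction and demolition floor areas by per-area waste intensity units, is sketched below. The intensity values and floor areas are hypothetical and are not the Seoul figures from the study.

```python
# Hypothetical waste intensity units (tonnes of debris per m^2 of floor area).
intensity = {"construction": 0.05, "demolition": 1.0}

# Hypothetical projected floor areas for a target year (m^2).
floor_area = {"construction": 40_000_000, "demolition": 22_000_000}

total_debris = sum(intensity[k] * floor_area[k] for k in intensity)
print(f"estimated C&D debris: {total_debris / 1e6:.1f} million tonnes")
```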

  14. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
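
    The idea of eliminating the state and regressing the output on past inputs and outputs can be illustrated with an ordinary least-squares fit of an input-output (ARX-type) model; the second-order system, noise level and data length below are assumptions for illustration, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500
u = rng.standard_normal(N)   # known input sequence
y = np.zeros(N)
for t in range(2, N):        # "true" system, unknown to the estimator
    y[t] = (1.5 * y[t - 1] - 0.7 * y[t - 2]
            + 0.5 * u[t - 1] + 0.2 * u[t - 2]
            + 0.01 * rng.standard_normal())

# Regression form: y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + b2*u[t-2]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))
```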

  15. Traffic congestion forecasting model for the INFORM System. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Azarm, A.; Mughabghab, S.; Stock, D.

    1995-05-01

    This report describes a computerized traffic forecasting model, developed by Brookhaven National Laboratory (BNL) for a portion of the Long Island INFORM Traffic Corridor. The model has gone through a testing phase, and currently is able to make accurate traffic predictions up to one hour forward in time. The model will eventually take on-line traffic data from the INFORM system roadway sensors and make projections as to future traffic patterns, thus allowing operators at the New York State Department of Transportation (D.O.T.) INFORM Traffic Management Center to more optimally manage traffic. It can also form the basis of a travel information system. The BNL computer model developed for this project is called ATOP for Advanced Traffic Occupancy Prediction. The various modules of the ATOP computer code are currently written in Fortran and run on PC computers (pentium machine) faster than real time for the section of the INFORM corridor under study. The following summarizes the various routines currently contained in the ATOP code: Statistical forecasting of traffic flow and occupancy using historical data for similar days and time (long term knowledge), and the recent information from the past hour (short term knowledge). Estimation of the empirical relationships between traffic flow and occupancy using long and short term information. Mechanistic interpolation using macroscopic traffic models and based on the traffic flow and occupancy forecasted (item-1), and the empirical relationships (item-2) for the specific highway configuration at the time of simulation (construction, lane closure, etc.). Statistical routine for detection and classification of anomalies and their impact on the highway capacity which are fed back to previous items.

  16. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...

  17. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated values varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.

  18. Transit forecasting accuracy : ridership forecasts and capital cost estimates, final research report.

    Science.gov (United States)

    2009-01-01

    In 1992, Pickrell published a seminal piece examining the accuracy of ridership forecasts and capital cost estimates for fixed-guideway transit systems in the US. His research created heated discussions in the transit industry regarding the ability o...

  19. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity. We also study a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the matching moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many mathematical or statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
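
    As one concrete piece of the toolbox listed above, a minimal sketch of maximum likelihood estimation for the Weibull lifetime model with right-censored observations is given below. The lifetimes and the censoring time are simulated placeholders rather than data from the report.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, t, observed):
    """Negative log-likelihood of Weibull(shape k, scale lam) with right censoring."""
    k, lam = params
    z = t / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # exact failure times
    log_surv = -z ** k                                         # censored observations
    return -np.sum(np.where(observed, log_pdf, log_surv))

# Simulated component lifetimes, right-censored at 2000 hours.
rng = np.random.default_rng(7)
life = 1500.0 * rng.weibull(1.8, 200)
observed = life < 2000.0
t = np.minimum(life, 2000.0)

res = minimize(weibull_nll, x0=[1.0, float(np.mean(t))], args=(t, observed),
               bounds=[(1e-3, None), (1e-3, None)])
print("MLE shape and scale:", np.round(res.x, 3))
```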

  20. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  1. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrate the advantages of the cokriging model

  2. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" or new plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs. actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein

  3. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...

  4. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable tools for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper, a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  5. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  6. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is a representation of the relationship between an independent variable and a dependent variable. In logistic regression the dependent variable is categorical, and the model is used to calculate the odds of its categories. When the dependent variable has ordered levels, the logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using the R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units used are 144 villages in Semarang City. The results of the research provide a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  7. Evaluation of GPM IMERG Early, Late, and Final rainfall estimates using WegenerNet gauge data in southeastern Austria

    Science.gov (United States)

    O, Sungmin; Foelsche, Ulrich; Kirchengast, Gottfried; Fuchsberger, Juergen; Tan, Jackson; Petersen, Walter A.

    2017-12-01

    The Global Precipitation Measurement (GPM) Integrated Multi-satellite Retrievals for GPM (IMERG) products provide quasi-global (60° N-60° S) precipitation estimates, beginning March 2014, from the combined use of passive microwave (PMW) and infrared (IR) satellites comprising the GPM constellation. The IMERG products are available in the form of near-real-time data, i.e., IMERG Early and Late, and in the form of post-real-time research data, i.e., IMERG Final, after monthly rain gauge analysis is received and taken into account. In this study, IMERG version 3 Early, Late, and Final (IMERG-E, IMERG-L, and IMERG-F) half-hourly rainfall estimates are compared with gauge-based gridded rainfall data from the WegenerNet Feldbach region (WEGN) high-density climate station network in southeastern Austria. The comparison is conducted over two IMERG 0.1° × 0.1° grid cells, entirely covered by 40 and 39 WEGN stations each, using data from the extended summer season (April-October) for the first two years of the GPM mission. The entire dataset is divided into two rainfall intensity ranges (low and high) and two seasons (warm and hot), and we evaluate the performance of IMERG, using both statistical and graphical methods. Results show that IMERG-F rainfall estimates are in the best overall agreement with the WEGN data, followed by IMERG-L and IMERG-E estimates, particularly for the hot season. We also illustrate, through rainfall event cases, how insufficient PMW sources and errors in motion vectors can lead to wide discrepancies in the IMERG estimates. Finally, by applying the method of Villarini and Krajewski (2007), we find that IMERG-F half-hourly rainfall estimates can be regarded as a 25 min gauge accumulation, with an offset of +40 min relative to its nominal time.

  8. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    Full Text Available This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is defined. The results of applying the proposed approach to the formation of a piecewise-continuous model of an aircraft turbofan engine are presented

  9. Advanced numerical modelling of a fire. Final report

    International Nuclear Information System (INIS)

    Heikkilae, L.; Keski-Rahkonen, O.

    1996-03-01

    Experience and probabilistic risk assessments show that fires present a major hazard in a nuclear power plant (NPP). The PALOME project (1988-92) improved the quality of numerical simulation of fires to make it a useful tool for fire safety analysis. Some of the most advanced zone model fire simulation codes were acquired. The performance of the codes was studied through literature and personal interviews in earlier studies and BRI2 code from the Japanese Building Research Institute was selected for further use. In PALOME 2 project this work was continued. Information obtained from large-scale fire tests at the German HDR facility allowed reliable prediction of the rate of heat release and was used for code validation. BRI2 code was validated particularly by participation in the CEC standard problem 'Prediction of effects caused by a cable fire experiment within the HDR-facility'. Participation in the development of a new field model code SOFIE specifically for fire applications as British-Swedish-Finnish cooperation was one of the goals of the project. SOFIE code was implemented at VTT and the first results of validation simulations were obtained. Well instrumented fire tests on electronic cabinets were carried out to determine source terms for simulation of room fires and to estimate fire spread to adjacent cabinets. The particular aim of this study was to measure the rate of heat release from a fire in an electronic cabinet. From the three tests, differing mainly in the amount of the fire load, data was obtained for source terms in numerical modelling of fires in rooms containing electronic cabinets. On the basis of these tests also a simple natural ventilation model was derived. (19 refs.)

  10. EXPERIMENTS AND COMPUTATIONAL MODELING OF PULVERIZED-COAL IGNITION; FINAL

    International Nuclear Information System (INIS)

    Samuel Owusu-Ofori; John C. Chen

    1999-01-01

    Under typical conditions of pulverized-coal combustion, which is characterized by fine particles heated at very high rates, there is currently a lack of certainty regarding the ignition mechanism of bituminous and lower-rank coals as well as the ignition rate of reaction. Furthermore, there have been no previous studies aimed at examining these factors under various experimental conditions, such as particle size, oxygen concentration, and heating rate. Finally, there is a need to improve current mathematical models of ignition to realistically and accurately depict the particle-to-particle variations that exist within a coal sample. Such a model is needed to extract useful reaction parameters from ignition studies, and to interpret ignition data in a more meaningful way. The authors propose to examine fundamental aspects of coal ignition through (1) experiments to determine the ignition temperature of various coals by direct measurement, and (2) modeling of the ignition process to derive rate constants and to provide a more insightful interpretation of data from ignition experiments. The authors propose to use a novel laser-based ignition experiment to achieve their first objective. Laser-ignition experiments offer the distinct advantage of easy optical access to the particles because of the absence of a furnace or radiating walls, and thus permit direct observation and particle temperature measurement. The ignition temperature of different coals under various experimental conditions can therefore be easily determined by direct measurement using two-color pyrometry. The ignition rate-constants, when the ignition occurs heterogeneously, and the particle heating rates will both be determined from analyses based on these measurements

  11. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  12. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    , surface turbulence characteristics. In this study, we follow a theory that is different from the local gust concept as described above. In this theory, the gust at the surface is non-local; it is produced by the deflection of air parcels flowing in the boundary layer and brought down to the surface...... from the Danish site Høvsøre help us to understand the limitation of the traditional method. Good agreement was found between the extreme gust atlases for South Africa and the existing map made from a limited number of measurements across the country. Our study supports the non-local gust theory. While...... through turbulent eddies. This process is modeled using the mesoscale Weather Forecasting and Research (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been...

  13. Land-use change and carbon sinks: Econometric estimation of the carbon sequestration supply function; FINAL

    International Nuclear Information System (INIS)

    Lubowski, Ruben N.; Plantinga, Andrew J.; Stavins, Robert N.

    2001-01-01

    Increased attention by policy makers to the threat of global climate change has brought with it considerable interest in the possibility of encouraging the expansion of forest area as a means of sequestering carbon dioxide. The marginal costs of carbon sequestration or, equivalently, the carbon sequestration supply function will determine the ultimate effects and desirability of policies aimed at enhancing carbon uptake. In particular, marginal sequestration costs are the critical statistic for identifying a cost-effective policy mix to mitigate net carbon dioxide emissions. We develop a framework for conducting an econometric analysis of land use for the forty-eight contiguous United States and employ it to estimate the carbon sequestration supply function. By estimating the opportunity costs of land on the basis of econometric evidence of landowners' actual behavior, we aim to circumvent many of the shortcomings of previous sequestration cost assessments. By conducting the first nationwide econometric estimation of sequestration costs, endogenizing prices for land-based commodities, and estimating land-use transition probabilities in a framework that explicitly considers the range of land-use alternatives, we hope eventually to provide better estimates of the true costs of large-scale carbon sequestration efforts. In this way, we seek to add to understanding of the costs and potential of this strategy for addressing the threat of global climate change

  14. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  15. Use of non-conjugate prior distributions in compound failure models. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariance of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report
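
    The simplest of the fitting strategies listed here, matching the sample moments of the observed failure fractions to those of the beta prior, can be sketched as follows. The demand and failure counts are hypothetical, and the sketch ignores the extra binomial sampling variance that the marginal matching-moments and marginal maximum likelihood methods are designed to handle.

```python
import numpy as np

# Hypothetical failure-on-demand data: k_i failures out of n_i demands per component.
k = np.array([1, 0, 2, 1, 0, 3, 1, 0])
n = np.array([50, 40, 60, 55, 45, 70, 50, 40])

p = k / n                          # observed failure fractions
m, v = p.mean(), p.var(ddof=1)     # sample mean and variance

# Method of moments for Beta(a, b): m = a/(a+b), v = ab/((a+b)^2 (a+b+1)).
common = m * (1.0 - m) / v - 1.0
a, b = m * common, (1.0 - m) * common
print(f"matched beta prior: alpha={a:.2f}, beta={b:.2f}")
```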

  16. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important. Incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume that operating expense stays fixed in dollar terms throughout the lifetime of a field, nor is it correct to assume that operating costs stay fixed on a dollar per barrel basis

  17. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  18. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics in low signal-to-interference (SIR) levels, and an experiment ...

  19. Cost estimates for near-term depolyment of advanced traffic management systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, S.S.; Chin, S.M.

    1993-02-15

    The objective of this study is to provide cost estimates for the engineering, design, installation, operation and maintenance of Advanced Traffic Management Systems (ATMS) in the largest 75 metropolitan areas in the United States. This report gives estimates for deployment costs for ATMS in the next five years, subject to the qualifications and caveats set out in the following paragraphs. The report considers the infrastructure components required to fully realize a functional ATMS over each of two highway networks (as discussed in the section describing our general assumptions) under each of the four architectures identified in the MITRE Intelligent Vehicle Highway Systems (IVHS) Architecture studies. The architectures are summarized in this report in Table 2. Estimates are given for eight combinations of highway networks and architectures. We estimate that it will cost between $8.5 billion (minimal network) and $26 billion (augmented network) to proceed immediately with deployment of ATMS in the largest 75 metropolitan areas. Costs are given in 1992 dollars and are not adjusted for future inflation. Our estimates are based partially on completed project costs, which have been adjusted to 1992 dollars. We assume that a particular architecture will be chosen; projected costs are broken down by architecture.

  20. Estimating varying coefficients for partial differential equation models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

  1. Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes - FY-98 Final Report for LDRD 2349

    Energy Technology Data Exchange (ETDEWEB)

    Kessinger, Glen Frank; Nelson, Lee Orville; Grandy, Jon Drue; Zuck, Larry Douglas; Kong, Peter Chuen Sun; Anderson, Gail

    1999-08-01

    The purpose of LDRD #2349, Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes, was to develop a set of tools that would allow the user, based on the chemical composition of a waste stream to be immobilized, to predict the durability (leach behavior) of the final waste form and the phase assemblages present in it. The objectives of the project were:
    • investigation, testing, and selection of a thermochemical code
    • development of an auxiliary thermochemical database
    • synthesis of materials for leach testing
    • collection of leach data
    • use of leach data for leach model development
    • thermochemical modeling
    The progress toward completion of these objectives, and a discussion of the work that remains to bring the project to a logical finishing point, are presented.

  2. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
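    A minimal sketch of the Dk rescaling described above, under the assumption of a small hypothetical relationship matrix K and a made-up variance component: Dk is simply the mean of the diagonal of K minus the mean of all entries of K.

```python
import numpy as np

def dk_statistic(K):
    """D_k for a relationship matrix K: the average self-relationship
    (diagonal) minus the average of all (self- and across-) relationships.
    Multiplying an estimated genetic variance by D_k refers it to the
    chosen reference population.
    """
    K = np.asarray(K, dtype=float)
    return K.diagonal().mean() - K.mean()

# Hypothetical identity-by-state-like relationship matrix for three individuals
K = np.array([[1.00, 0.10, 0.05],
              [0.10, 1.00, 0.20],
              [0.05, 0.20, 1.00]])
sigma2_g = 2.5   # variance component from the mixed model (made-up value)

print("D_k =", round(dk_statistic(K), 4))
print("rescaled genetic variance =", round(sigma2_g * dk_statistic(K), 4))
```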

  3. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.

  4. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Background: Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models could be used to estimate disease incidence from prevalence. Methods: We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results: Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. This method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated MI declining rate in the U.S. population was in accordance with the literature. This method was superior to a closed cohort in terms of estimating the trend of population cardiovascular disease incidence. Conclusion: It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.

  5. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  6. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  7. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  8. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  9. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical

  10. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  11. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...

  12. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  13. Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling

    Directory of Open Access Journals (Sweden)

    Maryam Kheirollahpour

    2014-01-01

    The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connections among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children, and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among the variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model.

  14. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
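    The article offers Mathematica code; the following is an independent, minimal Python sketch of the same idea, fitting a logistic growth curve to hypothetical observations (not the historical data set) by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, P0):
    """Logistic growth: P(t) = K / (1 + ((K - P0)/P0) * exp(-r t))."""
    return K / (1.0 + (K - P0) / P0 * np.exp(-r * t))

# Hypothetical observations (time in days, population size)
t_obs = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
p_obs = np.array([2.0, 5.1, 12.4, 27.0, 47.5, 63.0, 70.8])

# Nonlinear least squares; p0 gives rough starting values for K, r, P0
params, cov = curve_fit(logistic, t_obs, p_obs, p0=[80.0, 1.0, 2.0])
K_hat, r_hat, P0_hat = params
print(f"K = {K_hat:.2f}, r = {r_hat:.2f}, P0 = {P0_hat:.2f}")
```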

  15. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    Instead, through stepwise inclusion of type traits in the PH model, the ... Great Britain uses a bivariate animal model for all breeds ... (Štípková, 2012) and then applying linear models to the combined datasets with the ... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...

  16. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  17. Estimating the Competitive Storage Model with Trending Commodity Prices

    OpenAIRE

    Gouel , Christophe; LEGRAND , Nicolas

    2017-01-01

    We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred ...

  18. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  19. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ... SDS-PAGE and showed the same molecular size as ...

  20. COPS model estimates of LLEA availability near selected reactor sites

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1979-11-01

    The COPS computer model has been used to estimate local law enforcement agency (LLEA) officer availability in the neighborhood of selected nuclear reactor sites. The results of these analyses are presented both in graphic and tabular form in this report

  1. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  2. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measures in mechanical products. ... various cutting speeds have been developed using regression analysis software.

  3. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  4. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE have significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
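    For orientation, the sketch below shows the two simplest of the four estimators named above, the arithmetic mean estimator (applied to prior samples) and the harmonic mean estimator (applied to posterior samples), evaluated on made-up log-likelihood values; the stabilized harmonic mean and thermodynamic integration estimators are not reproduced here.

```python
import numpy as np

def arithmetic_mean_estimator(log_lik_prior_samples):
    """AME: log of the average likelihood at parameter draws from the prior."""
    ll = np.asarray(log_lik_prior_samples)
    m = ll.max()
    return m + np.log(np.exp(ll - m).mean())   # log-mean-exp for stability

def harmonic_mean_estimator(log_lik_posterior_samples):
    """HME: log of the harmonic mean of likelihoods at posterior draws."""
    ll = np.asarray(log_lik_posterior_samples)
    m = (-ll).max()
    return -(m + np.log(np.exp(-ll - m).mean()))

# Hypothetical log-likelihood values at prior and posterior parameter draws
rng = np.random.default_rng(0)
log_lik_prior = rng.normal(-120.0, 15.0, size=5000)
log_lik_post = rng.normal(-100.0, 3.0, size=5000)

print("log marginal likelihood (AME):", arithmetic_mean_estimator(log_lik_prior))
print("log marginal likelihood (HME):", harmonic_mean_estimator(log_lik_post))
```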

  5. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  6. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
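    To illustrate the general idea behind difference-based estimation (not the optimal sequences proposed in the paper), the sketch below computes a first-order, Rice-type difference-based estimate of the residual variance for a nonparametric regression on simulated data: differencing adjacent observations (sorted by the covariate) removes the smooth trend, leaving mostly noise.

```python
import numpy as np

def rice_variance_estimator(x, y):
    """First-order difference-based estimator of the residual variance in a
    nonparametric regression y_i = f(x_i) + eps_i (Rice-type estimator)."""
    order = np.argsort(x)
    y_sorted = np.asarray(y, dtype=float)[order]
    d = np.diff(y_sorted)
    return np.sum(d ** 2) / (2.0 * (len(y_sorted) - 1))

# Simulated data: smooth signal plus noise with true variance 0.25
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, size=x.size)
print("estimated residual variance:", rice_variance_estimator(x, y))
```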

  7. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging using options, the value of the option is evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  8. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  9. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    Science.gov (United States)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.

  10. FACTORS INFLUENCING THE ACCURACY OF PRE-CONTRACT STAGE ESTIMATION OF FINAL CONTRACT PRICE IN NEW ZEALAND

    Directory of Open Access Journals (Sweden)

    Cong Ji

    2014-12-01

    Establishing and prioritising the factors that may influence the final contract price when responding to a call for tenders is crucial for proper risk analysis and reliable forecasting; it could make or mar the ability to achieve the expected profit margin in an era of lump-sum fixed-price contracts where clients often contest variation claims. In New Zealand, these factors have not been researched; hence estimators rely only on judgement to 'guess-estimate' in their price forecasting. This study aimed to fill the knowledge gap by investigating the priority factors. 150 responses from professional members of the New Zealand Institute of Quantity Surveyors were analysed using a multi-attribute method. Results showed thirty-seven factors which could influence the final contract price, the three most influential being poor tender documentation, complexity of design & construction, and completeness of project information. Other factors relating to project, client and contractor characteristics, design consultants and tendering conditions, estimating practice and external factors were reported. Concordance analysis indicated a high level of agreement amongst survey participants in the rank-ordering of the relative importance of the identified factors. The findings could assist quantity surveyors to prepare more reliable contract price estimates at the pre-contract stage. It would also improve construction-stage cost control.

  11. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

    This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual-specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the problem of endogeneity of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations. It shows that the threshold and slope parameters can be estimated accurately and also that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  12. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/

  13. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
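    Typical usage of the toolbox, paraphrased from the kind of workflow the paper describes, looks roughly like the following; the file name is hypothetical and exact function names may differ between HDDM versions.

```python
import hddm

# Load response-time data with columns such as 'rt' and 'response'
data = hddm.load_csv('mydata.csv')       # file name is hypothetical

# Build the hierarchical drift-diffusion model and draw posterior samples
model = hddm.HDDM(data)
model.find_starting_values()             # optimize a starting point for MCMC
model.sample(2000, burn=200)             # MCMC sampling

model.print_stats()                      # posterior summaries of v, a, t, ...
```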

  14. Evaluation of GPM IMERG Early, Late, and Final rainfall estimates using WegenerNet gauge data in southeastern Austria

    Directory of Open Access Journals (Sweden)

    S. O

    2017-12-01

    The Global Precipitation Measurement (GPM) Integrated Multi-satellite Retrievals for GPM (IMERG) products provide quasi-global (60° N–60° S) precipitation estimates, beginning March 2014, from the combined use of passive microwave (PMW) and infrared (IR) satellites comprising the GPM constellation. The IMERG products are available in the form of near-real-time data, i.e., IMERG Early and Late, and in the form of post-real-time research data, i.e., IMERG Final, after monthly rain gauge analysis is received and taken into account. In this study, IMERG version 3 Early, Late, and Final (IMERG-E, IMERG-L, and IMERG-F) half-hourly rainfall estimates are compared with gauge-based gridded rainfall data from the WegenerNet Feldbach region (WEGN) high-density climate station network in southeastern Austria. The comparison is conducted over two IMERG 0.1° × 0.1° grid cells, entirely covered by 40 and 39 WEGN stations each, using data from the extended summer season (April–October) for the first two years of the GPM mission. The entire data are divided into two rainfall intensity ranges (low and high) and two seasons (warm and hot), and we evaluate the performance of IMERG using both statistical and graphical methods. Results show that IMERG-F rainfall estimates are in the best overall agreement with the WEGN data, followed by IMERG-L and IMERG-E estimates, particularly for the hot season. We also illustrate, through rainfall event cases, how insufficient PMW sources and errors in motion vectors can lead to wide discrepancies in the IMERG estimates. Finally, by applying the method of Villarini and Krajewski (2007), we find that IMERG-F half-hourly rainfall estimates can be regarded as a 25 min gauge accumulation, with an offset of +40 min relative to its nominal time.

  15. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  16. Estimates of population distributions and tailings areas around licensed uranium mill sites. Final report

    International Nuclear Information System (INIS)

    Hans, J.M.; Hall, J.B.; Moore, W.E.

    1986-08-01

    Population distributions and tailings areas were estimated from aerial photography for each of 21 licensed uranium millsites. Approximately 11,600 persons live within 5 kilometers of the tailings impoundments at the millsites. About 82% of these persons live near five of the millsites. No persons were found living within 5 kilometers of six of the millsites. Tailings area measurements include the surface area of tailings in impoundments, heap-leached ore, and carryover tailings in evaporation ponds. Approximately 4,000 acres of tailings surfaces were measured for the 21 millsites. About 55% of the tailings surfaces were dry, 11% wet, and the remainder ponded. The average tailings surface area for the millsites is about 200 acres and ranges from 7 to 813 acres

  17. Estimation of Threshold for the Signals of the BLMs around the LHC Final Focus Triplet Magnets

    CERN Document Server

    Cerutti, F; Ferrari, A; Hoa, C; Mauri, M; Mereghetti, A; Sapinski, M; Wildner, E; CERN. Geneva. ATS Department

    2012-01-01

    The Interaction Points of the Large Hadron Collider are the regions where the two circulating beams collide. Hence, the magnets the closest to any Interaction Point are exposed to an elevated radiation field due to the collision debris. In this study the signal in the Beam Loss Monitors due to the debris is estimated. In addition, for three different scenarios of beam losses, the energy density in the coils and the signal in the Beam Loss Monitors at quench are computed. It is shown that the Beam Loss Monitors, as presently installed on the vacuum vessel of the magnets, cannot disentangle the signal due to a localised loss from the constant signal due to the debris in case of steady-state losses.

  18. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results:
    * A streaming algorithm called HyperHeadTail, that estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time.
    * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.

  19. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
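    As a sketch of the "full" versus "simple" error formulations discussed above, the code below evaluates a Gaussian log-likelihood for streamflow simulation errors with an AR(1) structure; setting rho to zero recovers the independent-error (simple) case. The data and parameter values are made up, and this is not the exact Ecomag likelihood.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood_ar1(q_obs, q_sim, sigma, rho):
    """Log-likelihood of simulation errors e_t = q_obs - q_sim under a
    Gaussian AR(1) error model: e_t = rho * e_{t-1} + v_t, v_t ~ N(0, sigma^2).
    rho = 0 gives the simple independent-error likelihood.
    """
    if not abs(rho) < 1:
        return -np.inf
    e = np.asarray(q_obs, dtype=float) - np.asarray(q_sim, dtype=float)
    v = e[1:] - rho * e[:-1]                                   # innovations
    ll0 = norm.logpdf(e[0], scale=sigma / np.sqrt(1.0 - rho ** 2))  # stationary start
    return ll0 + norm.logpdf(v, scale=sigma).sum()

# Hypothetical observed and simulated daily streamflow series
rng = np.random.default_rng(2)
q_sim = 5.0 + np.sin(np.linspace(0, 6, 365))
q_obs = q_sim + rng.normal(0, 0.3, size=q_sim.size)
print("AR(1) log-likelihood:", log_likelihood_ar1(q_obs, q_sim, sigma=0.3, rho=0.5))
print("simple log-likelihood:", log_likelihood_ar1(q_obs, q_sim, sigma=0.3, rho=0.0))
```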

  20. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. The literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
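    A rough stand-in for this approach, using variational inference instead of the paper's MCMC sampler: scikit-learn's BayesianGaussianMixture with a truncated Dirichlet-process prior, fitted to log travel times, behaves like a lognormal mixture whose effective number of components is learned from the data. The travel-time values below are simulated, not the freeway corridor data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical 5-minute corridor travel times (minutes); a Gaussian mixture
# on the log scale corresponds to a lognormal mixture on the original scale.
rng = np.random.default_rng(3)
travel_times = np.concatenate([
    rng.lognormal(mean=np.log(8.0), sigma=0.08, size=400),   # free-flow state
    rng.lognormal(mean=np.log(14.0), sigma=0.20, size=150),  # congested state
])
X = np.log(travel_times).reshape(-1, 1)

# Truncated Dirichlet-process mixture with at most 6 components; unused
# components receive negligible weight, so the state count is data-driven.
dpmm = BayesianGaussianMixture(
    n_components=6,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

print("component weights:", np.round(dpmm.weights_, 3))
print("component means (minutes):", np.round(np.exp(dpmm.means_.ravel()), 2))
```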

  1. Modeling the Pan-Arctic terrestrial and atmospheric water cycle. Final report; FINAL

    International Nuclear Information System (INIS)

    Gutowski, W.J. Jr.

    2001-01-01

    This report describes results of DOE grant DE-FG02-96ER61473 to Iowa State University (ISU). Work on this grant was performed at Iowa State University and at the University of New Hampshire in collaboration with Dr. Charles Vorosmarty and fellow scientists at the University of New Hampshire's (UNH's) Institute for the Study of the Earth, Oceans, and Space, a subcontractor to the project. Research performed for the project included development, calibration and validation of a regional climate model for the pan-Arctic, modeling river networks, extensive hydrologic database development, and analyses of the water cycle, based in part on the assembled databases and models. Details appear in publications produced from the grant

  2. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  3. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  4. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a "no-difference" option and is technically identical to the paired preference test with a "no-preference" option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative ...

  5. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.

  6. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  7. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    The aim was to estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seems to measure the association better than the odds ratio when prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
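    The paper uses SAS; a hedged Python analogue of the basic log-binomial fit (without the COPY adjustment) is sketched below with simulated data, using a binomial GLM with a log link so that exponentiated coefficients are prevalence ratios. Like the SAS model, it may fail to converge when fitted probabilities approach 1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data: binary outcome (favours a smoking ban), smoking status, age
rng = np.random.default_rng(4)
n = 500
smoker = rng.integers(0, 2, size=n)
age = rng.uniform(20, 70, size=n)
p = np.clip(0.7 - 0.25 * smoker + 0.002 * (age - 45), 0.01, 0.99)
favours_ban = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"smoker": smoker, "age": age}))

# Binomial family with a log link: exp(coefficients) are prevalence ratios,
# not odds ratios.  Convergence is not guaranteed for log-binomial models.
model = sm.GLM(favours_ban, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()
print(np.exp(result.params))   # prevalence ratios for smoker and age
```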

  8. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  9. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    Science.gov (United States)

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  10. Estimation of pure autoregressive vector models for revenue series ...

    African Journals Online (AJOL)

    This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...

  11. Vacuum expectation values for four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskij, A.R.

    1985-01-01

    Some simple models (a system with a heavy quark, the rarefied instanton gas) are used to investigate the problem of factorizability. Characteristics of the vacuum fluctuations responsible for saturation of the four-fermion vacuum expectation values, which are known phenomenologically, are discussed. A qualitative agreement between the model and phenomenological estimates has been noted.

  12. Vacuum expectation values of four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskii, A.R.

    1985-01-01

    Simple models (a system with a heavy quark, a rarefied instanton gas) are used to study problems of factorizability. A discussion is given of the characteristics of the vacuum fluctuations responsible for saturation of the phenomenologically known four-fermion vacuum expectation values. Qualitative agreement between the model and phenomenological estimates is observed

  13. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
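
    A minimal sketch of the general idea, assuming a published characteristic curve at nominal speed and frequency-converter estimates of rotational speed and shaft power: the power estimate is referred to the nominal speed with the affinity laws, the flow rate is read off the nominal curve, and the result is scaled back to the actual speed. The curve values below are hypothetical, and the paper's own estimation methods are not reproduced here:

```python
import numpy as np

# Hypothetical characteristic curves of a centrifugal pump at nominal speed n0 (rpm):
# flow rate Q (m^3/h) versus shaft power P (kW) and head H (m)
n0 = 1450.0
Q_curve = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
P_curve = np.array([2.0, 3.1, 4.0, 4.8, 5.4, 5.8])
H_curve = np.array([32.0, 31.0, 29.0, 26.0, 22.0, 17.0])

def estimate_operating_point(n_est, P_est):
    """Estimate flow rate and head from speed and shaft power estimates.

    The measured power is referred to the nominal speed with the affinity laws
    (P ~ n^3, Q ~ n, H ~ n^2), the flow rate is interpolated from the nominal
    curve, and the result is scaled back to the actual speed.
    """
    P_at_n0 = P_est * (n0 / n_est) ** 3          # affinity law for power
    Q_at_n0 = np.interp(P_at_n0, P_curve, Q_curve)
    H_at_n0 = np.interp(Q_at_n0, Q_curve, H_curve)
    Q = Q_at_n0 * (n_est / n0)                   # affinity law for flow rate
    H = H_at_n0 * (n_est / n0) ** 2              # affinity law for head
    return Q, H

print(estimate_operating_point(n_est=1200.0, P_est=3.0))
```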

  14. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  15. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  16. Multiobjective Traffic Signal Control Model for Intersection Based on Dynamic Turning Movements Estimation

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

    Full Text Available The real-time traffic signal control for an intersection requires dynamic turning movements as the basic input data. It is impossible to detect dynamic turning movements directly through current traffic surveillance systems, but dynamic origin-destination (O-D) estimation can obtain them. However, combined models of dynamic O-D estimation and real-time traffic signal control are rare in the literature. A framework for the multiobjective traffic signal control model for an intersection based on dynamic O-D estimation (MSC-DODE) is presented. A state-space model using Kalman filtering is first formulated to estimate the dynamic turning movements; then a revised sequential Kalman filtering algorithm is designed to solve the model, and the root mean square error and mean percentage error are used to evaluate the accuracy of the estimated dynamic turning proportions. Furthermore, a multiobjective traffic signal control model is put forward to yield real-time signal control parameters and evaluation indices. Finally, based on practical survey data, the evaluation indices from MSC-DODE are compared with those from the Webster method. The actual and estimated turning movements are further input into MSC-DODE, respectively, and the results are also compared. Case studies show that the results of MSC-DODE are better than those of the Webster method and are very close to the otherwise unavailable actual values.
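
    The abstract's state-space formulation is not reproduced here, but the sketch below shows the generic predict/update cycle of a linear Kalman filter of the kind such estimators are built on, together with the two accuracy measures mentioned (root mean square error and mean percentage error); the matrices and their dimensions are left as assumptions:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance from the previous interval
    z    : new measurement vector (e.g. detector counts at the intersection)
    F, H : state transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    x_pred = F @ x                                   # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)            # update with the new measurement
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def rmse(estimated, actual):
    e, a = np.asarray(estimated, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((e - a) ** 2)))

def mean_percentage_error(estimated, actual):
    e, a = np.asarray(estimated, float), np.asarray(actual, float)
    return float(np.mean((e - a) / a) * 100.0)
```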

  17. Distress modeling for DARWin-ME : final report.

    Science.gov (United States)

    2013-12-01

    Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...

  18. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
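
    The state-space model itself is not reproduced here; as a point of comparison, the following hedged sketch shows the classical constant-rate estimate that the statistical model generalizes, obtained from the exponential decay of tracer-gas concentration after the source is switched off (the decay data are synthetic):

```python
import numpy as np

def ventilation_rate_from_decay(t_hours, conc, outdoor=0.0):
    """Estimate a constant air exchange rate (1/h) from tracer-gas decay.

    A mass balance without an indoor source gives
        C(t) = C_out + (C_0 - C_out) * exp(-lambda * t),
    so lambda is the negative slope of ln(C - C_out) versus time.
    """
    y = np.log(np.asarray(conc, float) - outdoor)
    slope, _ = np.polyfit(np.asarray(t_hours, float), y, 1)
    return -slope

# Hypothetical decay measurements (ppm) taken every 30 minutes
t = np.arange(0, 4.5, 0.5)
c = 900.0 * np.exp(-0.8 * t) + np.random.default_rng(2).normal(0, 5, t.size)
print(ventilation_rate_from_decay(t, c))   # should be close to 0.8 per hour
```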

  19. Final report on the project research 'stochastic effects of irradiation and risk estimation'

    International Nuclear Information System (INIS)

    1989-03-01

    The title project research was carried out through 1983-1987, by three groups for the studies of radiation carcinogenesis, human genetic effects, and radiotoxicology. 8 reports by the first group, 3 by the second, and 6 by the third group are collected in this issue. The contents are as follows. Serial sacrifice study on tumorigenesis in male C57BL/6J mice exposed to gamma-ray or fast neutron radiation; Influence of biological variables on radiation carcinogenesis; Studies on radiation-induced thymic lymphomagenesis; Modifying factors of radiation-induced myeloid leukemia of C3H/He mouse; Cell kinetic studies on radiation-induced leukemogenesis; Cytogenetical studies on the mechanism of radiation-induced neoplasms; Molecular biological study on genetic stability of the genome; Protein factors regulating proliferation and differentiation of normal and neoplastic cells; Studies on dose-radiation relationships for induction of chromosome aberrations in stem-spermatogonia of three crab-eating monkeys after low and high dose rate γ-irradiation; Risk estimation of radiation mutagenesis in man by using cultured mammalian cells; Effects of ionizing radiation on male germ cells of the crab-eating monkey; Movement and metabolism of radioactive particles in the respiratory tract; Studies on dosimetry for internally deposited alpha-emitters; Comparative toxicological studies on the effects of internal exposures; Studies on treatment of alpha-radioactive wastes; Methodological studies on the inhalation of radioactive aerosols; Removal of transuranic elements by DTPA. (A. Y.)

  20. Size of the coming solar cycle 24 based on Ohl's Precursor Method, final estimate

    Directory of Open Access Journals (Sweden)

    R. P. Kane

    2010-07-01

    Full Text Available In Ohl's Precursor Method (Ohl, 1966, 1976), the geomagnetic activity during the declining phase of a sunspot cycle is shown to be well correlated with the size (maximum sunspot number Rz(max)) of the next cycle. For solar cycle 24, Kane (2007a) used aa(min)=15.5 (12-month running mean), which occurred during March–May of 2006, and made a preliminary estimate Rz(max)=124±26 (12-month running mean). However, in the next few months, the aa index first increased and then decreased to a new low value of 14.8 in July 2007. With this new low value, the prediction was Rz(max)=117±26 (12-month running mean). However, even this proved a false signal. Since then, the aa values have decreased considerably and the last 12-monthly value is 8.7, centered at May 2009. For solar cycle 24, using aa(min)=8.7, the latest prediction is Rz(max)=58.0±25.0.
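
    The precursor method amounts to a regression of the next cycle's Rz(max) on aa(min) from past cycles, applied to the newly observed aa(min). The sketch below illustrates only that arithmetic; the (aa_min, Rz_max) pairs are hypothetical placeholders, not Kane's data:

```python
import numpy as np

# Hypothetical (aa_min, next-cycle Rz_max) pairs from past solar cycles
aa_min = np.array([12.0, 16.5, 14.0, 19.0, 15.5, 22.0])
rz_max = np.array([105.0, 150.0, 115.0, 160.0, 120.0, 190.0])

# Ordinary least-squares fit Rz_max = a + b * aa_min
b, a = np.polyfit(aa_min, rz_max, 1)
residuals = rz_max - (a + b * aa_min)
scatter = residuals.std(ddof=2)          # rough uncertainty about the regression line

aa_new = 8.7                             # 12-month running mean preceding cycle 24
print(f"predicted Rz(max) = {a + b * aa_new:.0f} +/- {scatter:.0f}")
```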

  1. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
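
    A small sketch of the comparison metrics used above, computing the Pearson correlation, regression slope, and Spearman rank correlation between per-subbasin water-yield estimates from the two models (the subbasin values are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical annual water yield (mm) per subbasin from the two models
swat   = np.array([310., 290., 355., 400., 275., 330., 360., 295.])
invest = np.array([300., 270., 340., 385., 250., 320., 345., 280.])

pearson_r, _ = stats.pearsonr(swat, invest)
spearman_r, _ = stats.spearmanr(swat, invest)        # agreement of subbasin rankings
slope, intercept = np.polyfit(swat, invest, 1)       # InVEST regressed on SWAT

print(f"Pearson r = {pearson_r:.2f}, slope = {slope:.2f}, Spearman r = {spearman_r:.2f}")
```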

  2. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  3. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....

  4. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

    Full Text Available It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.

  5. Radiation dose estimates and hazard evaluations for inhaled airborne radionuclides: Final report

    International Nuclear Information System (INIS)

    Mewhinney, J.A.

    1987-09-01

    The project objective was to conduct confirmatory research on physical chemical characteristics of aerosols produced during manufacture of mixed plutonium and uranium oxide nuclear fuel, to determine the radiation dose distribution in tissues of animals after inhalation exposure to representative aerosols of these materials, and to provide estimates of the relationship of radiation dose and biological response in animals after such inhalation exposure. The first chapter summarizes the physical chemical characterization of samples of aerosols collected from gloveboxes at industrial facilities during normal operations. This chapter provides insights into key aerosol characteristics which are of potential importance in determining the biological fate of specific radionuclides contained in the particulates that would be inhaled by humans following accidental release. The second chapter describes the spatial and temporal distribution of radiation dose in tissues of three species of animals exposed to representative aerosols collected from the industrial facilities. These inhalation studies provide a basis for comparison of the influence of physical chemical form of the inhaled particulates and the variability among species of animal in the radiation dose to tissue. The third chapter details the relationship between radiation dose and biological response in rats exposed to two aerosol forms, each at three levels of initial pulmonary burden. This study, conducted over the lifespan of the rats and assuming results to be applicable to humans, indicates that the hazard to health due to inhalation of these industrial aerosols is not different from that previously determined for a laboratory-produced aerosol of PuO2. Each chapter is processed separately for the data base.

  6. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing the recent development of MT as well as CSAMT impedance function modeling, and also some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling for the impedances is developed based on the edge element method. In the CSAMT case, the efforts focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data), which are frequently superimposed on normal ambient MT as well as CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models. The full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition

  7. Nuclear material inventory estimation in solvent extraction contactors III. Final report for the period 1 January 1988 - 28 February 1989

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.

    1989-03-01

    Simple mathematical models have been developed for estimating the nuclear material inventory in the solvent extraction contactors of nuclear fuel reprocessing facilities from measured nuclear material concentrations in the tanks feeding the purification cycles. The report describes the models and their application to the Eurochemic Reprocessing Plant in Belgium, and the development of a computer simulation program, PUPART, for investigating the effects of process variation as well as measurement error on near-real-time accounting methods. Figs and tabs

  8. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
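
    The framework in the paper is general and model-agnostic; as a much-reduced illustration of the same idea, the sketch below fits the two parameters of a discrete-time SIR-like model to synthetic weekly incidence data with a random-walk Metropolis sampler and a Gaussian likelihood (all priors, noise levels, and data are assumptions):

```python
import numpy as np

def sir_incidence(beta, gamma, n_weeks, N=10000, i0=1e-3):
    """Weekly new cases from a simple discrete-time SIR model."""
    s, i = 1.0 - i0, i0
    cases = []
    for _ in range(n_weeks):
        new = beta * s * i
        s, i = s - new, i + new - gamma * i
        cases.append(new * N)
    return np.array(cases)

def log_posterior(theta, data, sigma=50.0):
    beta, gamma = theta
    if not (0.0 < gamma < beta < 5.0):          # flat prior on a plausible region (R0 > 1)
        return -np.inf
    model = sir_incidence(beta, gamma, len(data))
    return -0.5 * np.sum(((data - model) / sigma) ** 2)   # Gaussian likelihood

rng = np.random.default_rng(0)
data = sir_incidence(1.6, 1.0, 30) + rng.normal(0, 50, 30)   # synthetic incidence data

theta = np.array([1.2, 0.8])
lp = log_posterior(theta, data)
samples = []
for _ in range(20000):                          # random-walk Metropolis sampler
    proposal = theta + rng.normal(0, 0.02, 2)
    lp_prop = log_posterior(proposal, data)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])              # discard burn-in
print("posterior means:", samples.mean(axis=0), "posterior SDs:", samples.std(axis=0))
```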

  9. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  10. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credits. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini index, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles, and one constraint is required to be met in this case: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using some numerical method for integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations (where k is a positive integer) of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k. If we choose too small a value, we get an overestimated Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
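
    For reference, the common decile-based empirical estimate of the Information value can be written in a few lines; the sketch below is only that baseline (with a small guard against empty bins), not the supervised-interval-selection or kernel estimates proposed in the paper, and the scores are simulated:

```python
import numpy as np

def information_value(score, bad, n_bins=10, eps=1e-6):
    """Empirical Information value using score deciles.

    score : model scores; bad : 1 for bad clients, 0 for good clients.
    The eps guard keeps the estimate finite when a bin contains no good
    or no bad clients (the constraint discussed in the paper).
    """
    score, bad = np.asarray(score, float), np.asarray(bad, int)
    edges = np.quantile(score, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, score, side="right") - 1, 0, n_bins - 1)
    n_good, n_bad = np.sum(bad == 0), np.sum(bad == 1)
    iv = 0.0
    for b in range(n_bins):
        in_bin = idx == b
        p_good = np.sum(in_bin & (bad == 0)) / n_good + eps
        p_bad = np.sum(in_bin & (bad == 1)) / n_bad + eps
        iv += (p_good - p_bad) * np.log(p_good / p_bad)
    return iv

rng = np.random.default_rng(3)
bad = rng.integers(0, 2, 5000)
score = rng.normal(loc=-0.8 * bad, scale=1.0)      # lower scores for bad clients
print(information_value(score, bad))
```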

  11. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  12. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  13. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
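
    As an illustration of the identification idea (not the paper's specific quasi-passive method), the sketch below estimates the resistive and inductive parts by least squares from sampled voltage and current using v ≈ R·i + L·di/dt, with synthetic 50 Hz signals:

```python
import numpy as np

def estimate_rl(v, i, dt):
    """Least-squares estimate of line resistance R and inductance L
    from sampled voltage v and current i, using v = R*i + L*di/dt."""
    di_dt = np.gradient(i, dt)
    A = np.column_stack([i, di_dt])
    (R, L), *_ = np.linalg.lstsq(A, v, rcond=None)
    return R, L

# Synthetic test signal with R = 0.4 ohm and L = 1.5 mH, fundamental plus 5th harmonic
dt = 1e-4
t = np.arange(0, 0.2, dt)
i = 10 * np.sin(2 * np.pi * 50 * t) + 2 * np.sin(2 * np.pi * 250 * t)
v = 0.4 * i + 1.5e-3 * np.gradient(i, dt) + np.random.default_rng(4).normal(0, 0.05, t.size)
print(estimate_rl(v, i, dt))
```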

  14. Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
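
    The yearly fuel-cost estimate the Guide enables is simple arithmetic; in the sketch below the annual mileage and fuel price are illustrative assumptions, not figures from the Guide:

```python
def annual_fuel_cost(mpg, miles_per_year=15000, price_per_gallon=3.50):
    """Estimated yearly fuel cost in dollars for a vehicle with the given mpg."""
    return miles_per_year / mpg * price_per_gallon

print(annual_fuel_cost(28))   # e.g. a 28 mpg vehicle: about $1875 per year
```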

  15. Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  16. Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  17. Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  18. Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-12-07

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  19. Models of economic geography: dynamics, estimation and policy evaluation

    OpenAIRE

    Knaap, Thijs

    2004-01-01

    In this thesis we look at economic geography models from a number of angles. We started by placing the theory in a context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition ‘revolution.’ Next, we looked at the theoretical properties of these models, especially when we allow firms to have different demand functions for intermediate goods. We estimated the model using a dataset on US states, and computed a number of counterfactuals....

  20. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    Science.gov (United States)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time for an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use Autoregressive (AR) models for this purpose that are adequate when the asset operating context is constant, however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated following the case of the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintain the focus on the current degradation level of an asset.
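
    The adaptation mechanism mentioned above, recursive parameter estimation with a forgetting factor, can be sketched as a standard recursive least-squares update; the regressor construction and the slowly drifting toy system below are illustrative assumptions, not the paper's crack-growth model:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with exponential forgetting for ARX-type models."""
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)      # parameter estimates
        self.P = np.eye(n_params) * p0       # covariance of the estimates
        self.lam = lam                       # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        """phi: regressor vector (past outputs and exogenous inputs), y: new output."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Illustrative use: y_t = a*y_{t-1} + b*u_t with slowly changing dynamics
rng = np.random.default_rng(5)
rls, y_prev = ForgettingRLS(2), 0.0
for t in range(500):
    u = rng.normal()
    a = 0.90 + 0.0001 * t                    # drifting first coefficient
    y = a * y_prev + 0.5 * u + rng.normal(0, 0.01)
    theta = rls.update(np.array([y_prev, u]), y)
    y_prev = y
print(theta)                                  # tracks roughly [a, 0.5]
```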

  1. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes

  2. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
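
    The thermodynamic (path-sampling) estimate amounts to averaging the log-likelihood over the MCMC samples drawn at each value of the heating coefficient and integrating those averages from zero to one. The sketch below assumes the per-temperature samples already exist and checks the recipe on a toy conjugate-normal problem (all data and priors are assumptions):

```python
import numpy as np

def log_marginal_likelihood(betas, loglik_samples):
    """Thermodynamic-integration estimate of ln p(D).

    betas          : increasing heating coefficients from 0.0 to 1.0
    loglik_samples : list of arrays; loglik_samples[j] holds log-likelihood values of
                     samples drawn from the power posterior at betas[j]
    """
    betas = np.asarray(betas, float)
    means = np.array([np.mean(s) for s in loglik_samples])                  # E_beta[ln L]
    return float(np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2.0))   # trapezoid rule

# Toy check: normal data with unknown mean, known unit variance, N(0, 10^2) prior on the
# mean; each power posterior is available in closed form, so "MCMC" samples are drawn directly
rng = np.random.default_rng(6)
data = rng.normal(0.5, 1.0, 20)
betas = np.linspace(0.0, 1.0, 21)
loglik_samples = []
for b in betas:
    post_var = 1.0 / (1.0 / 100.0 + b * len(data))
    post_mean = post_var * b * data.sum()
    mu = rng.normal(post_mean, np.sqrt(post_var), 4000)
    loglik = np.array([-0.5 * np.sum((data - m) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)
                       for m in mu])
    loglik_samples.append(loglik)
print(log_marginal_likelihood(betas, loglik_samples))
```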

  3. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  4. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially the algorithm on the basis of Kalman filter to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by EKF-Ah method under three operating conditions are quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using Kalman filter.

  5. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of occurrence of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters for the models of generating units, including the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as a mean square error for deviations between the measurement waveforms and the waveforms calculated based on the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  6. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating the long memory in stochastic volatility (SV components in order to develop the General Long Memory SV (GLMSV model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of GLMSV model in certain financial applications.

  7. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.

  8. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they are not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). Length of axes of the ellipses was obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided of the different body segment models tested, and the same parameters determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
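
    A minimal sketch of the slice-integration idea described above: each slice is an ellipse of area π·a·b, so segment mass, centre-of-mass location, and a frontal-plane moment of inertia follow from sums over slices. The width profiles, density, and inertia convention below are illustrative assumptions, not the authors' sex-specific density functions:

```python
import numpy as np

def segment_properties(z, a, b, density):
    """Mass, centre of mass and frontal-plane moment of inertia of a limb segment
    modelled as a stack of elliptical slices.

    z       : slice positions along the segment's long axis (m)
    a, b    : ellipse semi-axes of each slice, frontal and sagittal half-widths (m)
    density : tissue density per slice (kg/m^3), may vary along the segment
    """
    dz = np.gradient(z)
    dm = density * np.pi * a * b * dz            # mass of each elliptical slice
    mass = dm.sum()
    com = np.sum(z * dm) / mass                  # centre of mass along the long axis
    # Frontal-plane inertia: elliptical lamina term plus parallel-axis transfer
    i_frontal = np.sum(dm * (a ** 2 / 4.0) + dm * (z - com) ** 2)
    return mass, com, i_frontal

# Hypothetical forearm-like segment, 0.25 m long, tapering ellipses, uniform density
z = np.linspace(0, 0.25, 26)
a = np.linspace(0.045, 0.030, 26)                # frontal-plane semi-axis
b = np.linspace(0.040, 0.025, 26)                # sagittal-plane semi-axis
rho = np.full(26, 1060.0)
print(segment_properties(z, a, b, rho))
```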

  9. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. The linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as a tool for hydrological analysis by probabilistic approaches. In many regions in the world, watersheds are not gauged. This is true even in developed countries where the gauging network has continued to decline as a result of the lack of human and financial resources. Indeed, the obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecast. So we had to find a combination of rainfall-runoff models in which it would be possible to create our own data and use them to estimate the flow. The estimated design floods would be a good choice to illustrate the difficulties facing the hydrologist for the construction of a standard empirical model in basins where hydrological information is rare. The construction of the climate-hydrological model, which is based on frequency analysis, was established to estimate the design flood in the Anseghmir catchments, Morocco. The choice of using this complex model returns to its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed and also other hydrological elements (runoff, volumes of water...).The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.

  10. Biomass models to estimate carbon stocks for hardwood tree species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Montero, G.; Rio, M. del

    2012-11-01

    To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, above and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter whereas for the majority of species, total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in the accuracy of the models. A mean reduction of 20% in the root mean square error and a mean increase in the model efficiency of 7% in comparison with recently published models. So, the fitted models allow estimating more accurately the biomass stock in hardwood species from the Spanish National Forest Inventory data. (Author) 45 refs.

  11. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of a predictor-specific degree of smoothing increased the accuracy.

  13. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
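
    As a generic illustration of the local optimizer family used above, the sketch below minimizes a sum-of-squares objective with the Nelder-Mead simplex method in SciPy; the toy sigmoid "expression model" and its data are assumptions, and the global CMA-ES strategy (typically provided by a separate library) is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expression "data": noisy outputs of a toy sigmoidal regulatory model
x_obs = np.linspace(0, 1, 50)
y_obs = 2.0 / (1.0 + np.exp(-8.0 * (x_obs - 0.4)))          # true parameters (2.0, 8.0, 0.4)
y_obs = y_obs + np.random.default_rng(7).normal(0, 0.05, 50)

def sse(theta):
    """Sum of squared errors between model predictions and the observed data."""
    amp, steep, mid = theta
    pred = amp / (1.0 + np.exp(-steep * (x_obs - mid)))
    return np.sum((y_obs - pred) ** 2)

# Local search: Nelder-Mead simplex (a quasi-Newton polishing step could follow)
res = minimize(sse, x0=[1.0, 1.0, 0.5], method="Nelder-Mead")
print(res.x, res.fun)
```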

  14. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.

  15. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of aquifer extractable water directly relates to the recharge amount, estimation of recharge is a prerequisite for groundwater resources management. Recharge is strongly affected by water loss mechanisms, the most important of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that the SEBAL estimates were not reliable; as such, they could not serve to calibrate root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and in a portion of the central parts. Estimates ranged from 579 mm down to a minimum of 0 mm in the highest-elevation areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM component was evaluated to represent recharge estimation in the first layer. The highest estimated recharge flux was for autumn

  16. Dynamic process model of a plutonium oxalate precipitator. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.

    1977-11-01

    In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts.

  17. Dynamic process model of a plutonium oxalate precipitator. Final report

    International Nuclear Information System (INIS)

    Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.

    1977-11-01

    In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts

  18. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    Full Text Available We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.

  19. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements. PMID:28353683

  20. Motion estimation by data assimilation in reduced dynamic models

    International Nuclear Information System (INIS)

    Drifi, Karim

    2013-01-01

    Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion on image sequences. As one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease the necessary computation time and memory. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain [fr]

  1. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Full Text Available Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually minimizes the level of uncertainty while still providing "good" point estimates for budget and duration given the well type. In this paper, copula probability theory is used in order to model the dependencies between cost/duration and MRI (mechanical risk index). The MRI is a mathematical computation that relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, for which the study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate to weak dependence on MRI, which means that the variation in these wells can be related to MRI, but only to the extent that it is not the primary source.
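
    To make the copula idea concrete, the sketch below uses a Gaussian copula to couple an MRI-like index with lognormal cost and duration marginals and then draws dependent scenarios. The marginal distributions, correlation values, and scenario count are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Assumed rank correlations between MRI, cost and duration (illustrative only).
corr = np.array([[1.0, 0.5, 0.4],
                 [0.5, 1.0, 0.6],
                 [0.4, 0.6, 1.0]])

# Step 1: sample from a Gaussian copula (correlated normals -> dependent uniforms).
z = rng.multivariate_normal(mean=np.zeros(3), cov=corr, size=10_000)
u = stats.norm.cdf(z)

# Step 2: apply assumed marginals (normal-ish MRI index, lognormal cost/duration).
mri      = stats.norm.ppf(u[:, 0], loc=5.0, scale=1.5)
cost     = stats.lognorm.ppf(u[:, 1], s=0.4, scale=2.0e6)   # USD
duration = stats.lognorm.ppf(u[:, 2], s=0.3, scale=45.0)    # days

# Scenario-based estimates conditional on a high-MRI well (illustrative query).
high = mri > np.quantile(mri, 0.8)
print("P50 cost (all wells): %.2e   (high MRI): %.2e" %
      (np.median(cost), np.median(cost[high])))
print("P50 days (all wells): %.1f   (high MRI): %.1f" %
      (np.median(duration), np.median(duration[high])))
```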

  2. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
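
    As a minimal example of recovering photosynthesis parameters from a photosynthesis irradiance function (not the authors' full production-profile model), the sketch below fits the initial slope and the assimilation number of one commonly used saturating-exponential curve to synthetic data; the functional form is one of several possible choices and the data are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def pi_curve(I, alpha, Pm):
    """Saturating-exponential photosynthesis-irradiance function.
    alpha: initial slope, Pm: assimilation number (light-saturated rate)."""
    return Pm * (1.0 - np.exp(-alpha * I / Pm))

# Synthetic "measured" production at a set of irradiances (assumption).
I = np.linspace(5, 800, 25)                      # irradiance
true_alpha, true_Pm = 0.04, 6.0
P = pi_curve(I, true_alpha, true_Pm) + rng.normal(0, 0.25, I.size)

# Recover the photosynthesis parameters by nonlinear least squares.
popt, pcov = curve_fit(pi_curve, I, P, p0=[0.01, 1.0])
alpha_hat, Pm_hat = popt
alpha_se, Pm_se = np.sqrt(np.diag(pcov))
print(f"alpha = {alpha_hat:.4f} +/- {alpha_se:.4f}")
print(f"Pm    = {Pm_hat:.3f} +/- {Pm_se:.3f}")
```

    The abstract's point about parameter exchange can be reproduced by fitting one functional form and evaluating another with the same parameter values, which generally introduces bias.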

  3. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design and designers often seek techniques that enable accurate modeling of such events. Extremely leaky devices can cause functionality fails. The large number of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations for tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
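
    The trade-off described above can be illustrated with a small experiment: the sketch below approximates the sum of lognormal leakage currents with a single lognormal matched by the Fenton-Wilkinson moment method and compares its tail against Monte Carlo. The device count and lognormal parameters are assumptions, and the CDF matching method from the record is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# N leakage paths per bitline, each lognormal (assumed parameters).
N, mu, sigma = 64, 0.0, 0.8

# Monte Carlo reference for the bitline leakage (sum of lognormals).
samples = np.exp(rng.normal(mu, sigma, size=(200_000, N))).sum(axis=1)

# Fenton-Wilkinson: match the mean and variance of the sum with one lognormal.
m1 = np.exp(mu + sigma**2 / 2)                              # mean of one device
v1 = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)     # variance of one device
M, V = N * m1, N * v1                                       # moments of the independent sum
sigma_s = np.sqrt(np.log(1 + V / M**2))
mu_s = np.log(M) - sigma_s**2 / 2
fw = stats.lognorm(s=sigma_s, scale=np.exp(mu_s))

# Compare tail probabilities, where yield estimates are most sensitive.
for thr in np.quantile(samples, [0.9, 0.99, 0.999]):
    p_mc = (samples > thr).mean()
    print(f"thr={thr:7.1f}  MC tail={p_mc:.4f}  Fenton-Wilkinson tail={fw.sf(thr):.4f}")
```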

  4. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical

  5. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  6. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both approaches. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
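
    As a compact numerical illustration of the approach described (principal-component factors plus a thresholded sparse residual covariance), the sketch below builds a simplified estimator of that flavor on simulated returns. The dimensions, factor count, and threshold rule are assumptions, and the universal threshold used here is a simplification of the entry-adaptive thresholding cited in the record.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated returns: p assets, n periods, K common factors plus idiosyncratic noise.
n, p, K = 250, 100, 3
B = rng.normal(0, 1, (p, K))                 # factor loadings
F = rng.normal(0, 1, (n, K))                 # factor realizations
E = rng.normal(0, 0.5, (n, p))               # idiosyncratic components
X = F @ B.T + E

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n                            # sample covariance (p x p)

# Step 1: low-rank part from the leading K principal components.
vals, vecs = np.linalg.eigh(S)
idx = np.argsort(vals)[::-1][:K]
low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

# Step 2: threshold the residual ("idiosyncratic") covariance to enforce sparsity.
R = S - low_rank
tau = 0.3 * np.sqrt(np.log(p) / n)           # universal threshold (simplified rule)
R_sparse = np.where(np.abs(R) >= tau, R, 0.0)
np.fill_diagonal(R_sparse, np.diag(R))       # keep variances untouched

Sigma_hat = low_rank + R_sparse
truth = B @ B.T + 0.25 * np.eye(p)
print("spectral error, sample cov      :", round(np.linalg.norm(S - truth, 2), 3))
print("spectral error, factor+threshold:", round(np.linalg.norm(Sigma_hat - truth, 2), 3))
```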

  7. An integrative modeling approach for the efficient estimation of cross sectional tibial stresses during locomotion.

    Science.gov (United States)

    Derrick, Timothy R; Edwards, W Brent; Fellin, Rebecca E; Seay, Joseph F

    2016-02-08

    The purpose of this research was to utilize a series of models to estimate the stress in a cross section of the tibia, located 62% from the proximal end, during walking. Twenty-eight male, active duty soldiers walked on an instrumented treadmill while external force data and kinematics were recorded. A rigid body model was used to estimate joint moments and reaction forces. A musculoskeletal model was used to gather muscle length, muscle velocity, moment arm and orientation information. Optimization procedures were used to estimate muscle forces, and finally internal bone forces and moments were applied to an inhomogeneous, subject-specific bone model obtained from CT scans to estimate stress in the bone cross section. Validity was assessed by comparison to stresses calculated from strain gage data in the literature, and sensitivity was investigated using two simplified versions of the bone model: a homogeneous model and an ellipse approximation. Peak compressive stress occurred on the posterior aspect of the cross section (-47.5 ± 14.9 MPa). Peak tensile stress occurred on the anterior aspect (27.0 ± 11.7 MPa), while the location of peak shear was variable between subjects (7.2 ± 2.4 MPa). Peak compressive, tensile and shear stresses were within 0.52 MPa, 0.36 MPa and 3.02 MPa, respectively, of those calculated from the converted strain gage data. Peak values from the inhomogeneous bone model correlated well with the homogeneous model (normal: 0.99; shear: 0.94), as did the ellipse model for normal stress (r=0.89-0.96). However, the relationship between shear stress in the inhomogeneous model and the ellipse model was weaker (r=0.64). The procedures detailed in this paper provide a non-invasive and relatively quick method of estimating cross-sectional stress that holds promise for assessing injury and osteogenic stimulus in bone during normal physical activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
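
    As a simplified version of the final step described above (not the subject-specific inhomogeneous model), the sketch below evaluates normal stress around an idealized solid elliptical cross section from an axial force and two bending moments using standard beam theory. The geometry, load values, and sign convention are assumptions.

```python
import numpy as np

# Idealized solid elliptical cross section (semi-axes in m, assumptions).
a, b = 0.013, 0.011                 # semi-axis along x and along y
A  = np.pi * a * b                  # cross-sectional area
Ix = np.pi * a * b**3 / 4.0         # second moment of area about the x axis
Iy = np.pi * a**3 * b / 4.0         # second moment of area about the y axis

# Section loads at the site of interest (illustrative values, not study data).
N  = -2500.0                        # axial force [N], negative = compression
Mx = 40.0                           # bending moment about the x axis [N m]
My = 15.0                           # bending moment about the y axis [N m]

# Combined loading: sigma = N/A + Mx*y/Ix - My*x/Iy (one common sign convention).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x, y = a * np.cos(theta), b * np.sin(theta)
sigma = N / A + Mx * y / Ix - My * x / Iy       # normal stress [Pa] on the perimeter

print("peak compressive stress: %6.1f MPa" % (sigma.min() / 1e6))
print("peak tensile stress:     %6.1f MPa" % (sigma.max() / 1e6))
print("angle of peak compression: %.0f deg" % np.degrees(theta[np.argmin(sigma)]))
```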

  8. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  9. Regional forecasting with global atmospheric models; Final report

    Energy Technology Data Exchange (ETDEWEB)

    Crowley, T.J.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)

    1994-05-01

    The purpose of the project was to conduct model simulations for past and future climate change with respect to the proposed Yucca Mtn. repository. The authors report on three main topics, the first of which is boundary conditions for paleo-hindcast studies. These conditions are necessary for conducting three to four model simulations. The boundary conditions have been prepared for future runs. The second topic is (a) comparison of the atmospheric general circulation model (GCM) with observations and other GCMs; and (b) development of a better precipitation data base for the Yucca Mtn. region for comparisons with models. These tasks have been completed. The third topic is preliminary assessments of future climate change. Energy balance model (EBM) simulations suggest that the greenhouse effect will likely dominate climate change at Yucca Mtn. for the next 10,000 years. The EBM study should improve rational choice of GCM CO{sub 2} scenarios for future climate change.

  10. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorus losses for implementation of the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  11. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
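
    As a schematic of the model-based estimation idea (not the study's actual plot data or imagery), the sketch below fits a logistic regression of forest presence on spectral-band predictors and obtains a model-based area estimate as the mean predicted probability over pixels times the area of interest. The simulated predictors, coefficients, and the 10 km radius are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated "inventory plots": two spectral predictors and observed forest/non-forest.
n_plots = 500
bands = rng.normal(0, 1, size=(n_plots, 2))
p_true = 1 / (1 + np.exp(-(0.5 + 1.8 * bands[:, 0] - 1.2 * bands[:, 1])))
forest = rng.binomial(1, p_true)

model = LogisticRegression().fit(bands, forest)

# Simulated wall-to-wall imagery pixels covering a circular area of interest.
n_pixels = 100_000
pixels = rng.normal(0, 1, size=(n_pixels, 2))
p_forest = model.predict_proba(pixels)[:, 1]

aoi_km2 = np.pi * 10.0**2                      # e.g. a circle with a 10 km radius
forest_area = p_forest.mean() * aoi_km2        # model-based (synthetic) area estimate
print(f"estimated forest proportion: {p_forest.mean():.3f}")
print(f"model-based forest area:     {forest_area:.1f} km^2")
```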

  12. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  13. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  14. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  15. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  16. A general predictive model for estimating monthly ecosystem evapotranspiration

    Science.gov (United States)

    Ge Sun; Karrin Alstad; Jiquan Chen; Shiping Chen; Chelcy R. Ford; et al.

    2011-01-01

    Accurately quantifying evapotranspiration (ET) is essential for modelling regional-scale ecosystem water balances. This study assembled an ET data set estimated from eddy flux and sapflow measurements for 13 ecosystems across a large climatic and management gradient from the United States, China, and Australia. Our objectives were to determine the relationships among...

  17. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  18. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  19. Revaluating the Tanzi-Model to Estimate the Underground Economy

    NARCIS (Netherlands)

    Ferwerda, J.; Deleanu, I.; Unger, B.

    Since the early 1980s, the interest in the nature and size of the non-measured economy (both the informal and the illegal one) was born among researchers in the US. Since then, several models to estimate the shadow and/or the underground economy appeared in the literature, each with its own

  20. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  1. Models for the analytic estimation of low energy photon albedo

    International Nuclear Information System (INIS)

    Simovic, R.; Markovic, S.; Ljubenov, V.

    2005-01-01

    This paper presents some monoenergetic models for estimating photon reflection in the energy range from 20 keV to 80 keV. Using the DP0 approximation of the H-function, we have derived analytic expressions for the η and R functions in order to facilitate photon reflection analyses as well as radiation shield design. (author) [sr]

  2. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for an interval of three years (2010 – 2012), measured using various instruments at Yola, with recorded data collected from the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.
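
    One widely used empirical model of the kind this record refers to is the Angstrom-Prescott regression of the clearness index on relative sunshine duration; the sketch below fits its two coefficients to synthetic monthly data, since the record's actual measurements and its chosen model forms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic monthly data (assumptions): relative sunshine duration S/S0 and
# measured global radiation H, with extraterrestrial radiation H0 known.
s_rel = rng.uniform(0.3, 0.9, 36)                  # S/S0 for 36 months
H0 = rng.uniform(30.0, 38.0, 36)                   # MJ m^-2 day^-1
a_true, b_true = 0.25, 0.50
H = (a_true + b_true * s_rel) * H0 + rng.normal(0, 0.5, 36)

# Angstrom-Prescott model: H/H0 = a + b * (S/S0); fit a and b by least squares.
clearness = H / H0
b_hat, a_hat = np.polyfit(s_rel, clearness, 1)     # slope first, intercept second
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")

# Use the fitted model to estimate global radiation for a hypothetical month.
H_est = (a_hat + b_hat * 0.65) * 34.0
print(f"estimated H for S/S0 = 0.65, H0 = 34: {H_est:.1f} MJ m^-2 day^-1")
```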

  3. Remote sensing estimates of impervious surfaces for pluvial flood modelling

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Drews, Martin

    This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...

  4. Battery electric vehicle energy consumption modelling for range estimation

    NARCIS (Netherlands)

    Wang, J.; Besselink, I.J.M.; Nijmeijer, H.

    2017-01-01

    Range anxiety is considered as one of the major barriers to the mass adoption of battery electric vehicles (BEVs). One method to solve this problem is to provide accurate range estimation to the driver. This paper describes a vehicle energy consumption model considering the influence of weather

  5. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    Heritability estimates for functional longevity have been expressed on an original or a logarithmic scale with PH models. Ducrocq & Casella (1996) defined heritability on a logarithmic scale and modified it under simulation to incorporate the tri-gamma function (γ), as used by Sasaki et al. (2012) and Terawaki ...

  6. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    A definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organizing network are presented. Experimental estimation results for average radio channel utilization productivity, with and without buffering of ...

  7. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  8. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  9. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  10. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  11. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

    is tested in a totally controlled environment with the aim of checking the influence of eye tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very...

  12. Models for estimation of carbon sequestered by Cupressus ...

    African Journals Online (AJOL)

    This study compared models for estimating carbon sequestered aboveground in Cupressus lusitanica plantation stands at Wondo Genet College of Forestry and Natural Resources, Ethiopia. Relationships of carbon storage with tree component and stand age were also investigated. Thirty trees of three different ages (5, ...

  13. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually lead to high software development costs. Consequently, we propose its extension, ...
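
    For context, the basic COCOMO form referred to in this record estimates effort as a power law of size; the sketch below implements that basic form with the commonly published coefficient sets. It is shown only as background, not as the paper's proposed extension.

```python
# Basic COCOMO (Boehm, 1981): effort E = a * KLOC**b person-months,
# schedule D = c * E**d months. Coefficients are the standard published values.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b            # person-months
    duration = c * effort ** d        # months
    staff = effort / duration         # average headcount
    return effort, duration, staff

for mode in COEFFS:
    e, t, s = basic_cocomo(32.0, mode)   # a hypothetical 32 KLOC project
    print(f"{mode:13s}  effort={e:6.1f} PM  schedule={t:5.1f} mo  staff={s:4.1f}")
```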

  14. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  15. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually stands in for the real cost when evaluating economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity, which appears as the denominator in the equation, has been discounted. That is, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, as a whole, the cost of the same discount rate model is overestimated compared with the different discount rate model.

  16. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually stands in for the real cost when evaluating economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity, which appears as the denominator in the equation, has been discounted. That is, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, as a whole, the cost of the same discount rate model is overestimated compared with the different discount rate model

  17. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
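
    To make the interpolation step concrete, the sketch below detrends synthetic station SLR values against elevation with a linear least-squares fit and interpolates the residuals to a single target point by ordinary kriging with a spherical semivariogram. The station data, variogram parameters, distance approximation, and target point are assumptions; the real workflow would fit the variogram to the data and predict on a full monthly grid.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic stations: lon, lat [deg], elevation [m], and mean SLR (assumed data).
n = 40
lon = rng.uniform(-105.0, -95.0, n)
lat = rng.uniform(38.0, 45.0, n)
elev = rng.uniform(200.0, 2500.0, n)
slr = 8.0 + 0.004 * elev + rng.normal(0.0, 1.0, n)     # SLR rises with elevation

# 1) Remove the elevation trend with a linear least-squares fit.
slope, intercept = np.polyfit(elev, slr, 1)
resid = slr - (intercept + slope * elev)

# 2) Spherical semivariogram with assumed nugget, sill and range (not fitted).
def spherical(h, nugget=0.2, sill=1.0, rang=400.0):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h < rang, g, sill)

def dist_km(x1, y1, x2, y2):
    return 111.0 * np.hypot(x1 - x2, y1 - y2)          # rough km per degree

# 3) Ordinary kriging of the detrended residual at one target grid point.
x0, y0, elev0 = -100.0, 41.5, 1200.0
G = spherical(dist_km(lon[:, None], lat[:, None], lon[None, :], lat[None, :]))
np.fill_diagonal(G, 0.0)                               # semivariance is zero at zero lag
A = np.ones((n + 1, n + 1))
A[:n, :n] = G
A[n, n] = 0.0                                          # Lagrange multiplier row/column
b = np.ones(n + 1)
b[:n] = spherical(dist_km(lon, lat, x0, y0))
w = np.linalg.solve(A, b)[:n]                          # kriging weights (sum to 1)

# 4) Add the elevation trend back at the target point's elevation.
slr0 = intercept + slope * elev0 + w @ resid
print(f"kriged SLR estimate at grid point: {slr0:.2f}")
```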

  18. Cost estimation model for advanced planetary programs, fourth edition

    Science.gov (United States)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve upon the confidence in the model's capabilities with an expected accuracy of 20%. The model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.

  19. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step towards advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  20. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal among these variable-resolution approaches are RANS models with two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  1. Estimating Co-Contraction Activation of Trunk Muscles Using a Novel Musculoskeletal Model for Pregnant Women

    Directory of Open Access Journals (Sweden)

    Saori Morino

    2017-10-01

    Full Text Available Weight gain and stretched abdominal muscles from an enlarged gravid uterus are remarkable features during pregnancy. These changes elicit postural instability and place strain on body segments, contributing to lower back pain. In general, the agonist and antagonist muscles act simultaneously to increase joint stabilization; however, this can cause additional muscle stress during movement. Furthermore, this activation can be observed in pregnant women because of their unstable body joints. Hence, physical modalities based on assessments of muscle activation are useful for managing low back pain during pregnancy. Musculoskeletal models are common when investigating muscle load. However, it is difficult to apply such models to pregnant women and estimate the co-contraction of muscles using musculoskeletal models. Therefore, the purpose of this study is to construct a musculoskeletal model for pregnant women that estimates the co-contraction of trunk muscles. First, motion analysis was conducted on a pregnant woman and the muscle activations of the rectus abdominis and erector spinae were measured. Then, the musculoskeletal model was specifically modified for pregnant women. Finally, the co-contraction was estimated from the results of the musculoskeletal model and electromyography data using a genetic algorithm. With the proposed methods, weakened abdominal muscle torque and the co-contraction activation of trunk muscles were estimated successfully.

  2. Predictive Software Cost Model Study. Volume I. Final Technical Report.

    Science.gov (United States)

    1980-06-01

    development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search

  3. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
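
    A minimal sketch of the two ingredients named above, global optimization plus Tikhonov-style regularization, is given below for a toy two-parameter decay model. The ODE, noise level, reference parameter values, and regularization weight are assumptions, and a real calibration would also tune the regularization weight (e.g. by cross-validation) as the record suggests.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

rng = np.random.default_rng(10)

# Toy dynamic model: dx/dt = -k1 * x + k2 (assumed form), observed with noise.
def simulate(k1, k2, t):
    sol = solve_ivp(lambda t_, x: -k1 * x + k2, (t[0], t[-1]), [1.0], t_eval=t)
    return sol.y[0]

t_obs = np.linspace(0, 10, 15)
true_k = (0.7, 0.2)
y_obs = simulate(*true_k, t_obs) + rng.normal(0, 0.02, t_obs.size)

# Regularized cost: data misfit + Tikhonov penalty toward a prior guess.
k_ref = np.array([0.5, 0.5])        # prior knowledge (assumed)
lam = 0.05                          # regularization weight (assumed)

def cost(k):
    misfit = np.sum((simulate(k[0], k[1], t_obs) - y_obs) ** 2)
    penalty = lam * np.sum((np.asarray(k) - k_ref) ** 2)
    return misfit + penalty

# Global optimization over a bounded parameter space.
result = differential_evolution(cost, bounds=[(0.01, 2.0), (0.0, 1.0)],
                                seed=0, maxiter=60)
print("estimated parameters:", np.round(result.x, 3), " true:", true_k)
```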

  4. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    Science.gov (United States)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in an Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system, called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.

  5. Modeling future power plant location patterns. Final report

    International Nuclear Information System (INIS)

    Eagles, T.W.; Cohon, J.L.; ReVelle, C.

    1979-04-01

    The locations of future energy facilities must be specified to assess the potential environmental impact of those facilities. A computer model was developed to generate probable locations for the energy facilities needed to meet postulated future energy requirements. The model is designed to cover a very large geographical region. The regional demand for baseload electric generating capacity associated with a postulated demand growth rate over any desired time horizon is specified by the user as an input to the model. The model uses linear programming to select the most probable locations within the region, based on physical and political factors. The linear program is multi-objective, with four objective functions based on transmission, coal supply, population proximity, and water supply considerations. Minimizing each objective function leads to a distinct set of locations. The user can select the objective function or weighted combination of objective functions most appropriate to his interest. Users with disparate interests can use the model to see the locational changes which result from varying the weighting of the objective functions. The model has been implemented in a six-state mid-Atlantic region. The year 2000 was chosen as the study year, and a test scenario postulating 2.25% growth in baseload generating capacity between 1977 and 2000 was chosen. The scenario stipulated that this capacity be 50% nuclear and 50% coal-fired. Initial utility reaction indicates the objective based on transmission costs is most important for such a large-scale analysis

  6. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the life time distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + b t^(c-1), where a, b, c are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with the regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + b t, where a, b are greater than zero, can be obtained as the special case, letting c = 2
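
    For reference, the survival and density functions implied by this hazard follow directly from the cumulative hazard; the derivation below is standard and uses only the form of h(t) quoted in the record.

```latex
\[
h(t) = a + b\,t^{\,c-1}, \qquad a,\,b,\,c > 0 .
\]
The cumulative hazard and survival function are
\[
H(t) = \int_0^t h(u)\,du = a\,t + \frac{b}{c}\,t^{\,c}, \qquad
S(t) = \exp\!\left[-\left(a\,t + \frac{b}{c}\,t^{\,c}\right)\right],
\]
so the lifetime density is
\[
f(t) = h(t)\,S(t) = \left(a + b\,t^{\,c-1}\right)
       \exp\!\left[-\left(a\,t + \frac{b}{c}\,t^{\,c}\right)\right],
\]
and for $c = 2$ this reduces to the linearly increasing hazard model
$h(t) = a + b\,t$ with $S(t) = \exp\!\left[-(a\,t + b\,t^{2}/2)\right]$.
```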

  7. Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6

    International Nuclear Information System (INIS)

    Townley, L.R.; Trefry, M.G.; Barr, A.D.; Braumiller, S.

    1992-01-01

    This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling

  8. Alligator Rivers Analogue project. Hydrogeological modelling. Final Report - Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    Townley, L R; Trefry, M G; Barr, A D [CSIRO Div of Water Resources, PO Wembley, WA (Australia); Braumiller, S [Univ of Arizona, Tucson, AZ (United States). Dept of Hydrology and Water Resources; Kawanishi, M [Central Research Institute of Electric Power Industry, Abiko-Shi, Chiba-Ken (Japan); and others

    1993-12-31

    This volume describes hydrogeological modelling carried out as part of the Alligator Rivers Analogue Project. Hydrogeology has played a key integrating role in the Project, largely because water movement is believed to have controlled the evolution of the Koongarra uranium Orebody and therefore affects field observations of all types at all scales. Aquifer testing described uses the concept of transmissivity in its interpretation of aquifer response to pumping. The concept of an aquifer, a layer transmitting significant quantities of water in a mainly horizontal direction, seems hard to accept in an environment as heterogeneous as that at Koongarra. But modelling of aquifers both in one dimension and two dimensionally in plan has contributed significantly to our understanding of the site. A one-dimensional model with three layers (often described as a quasi two dimensional model) was applied to flow between the Fault and Koongarra Creek. Being a transient model, this model was able to show that reverse flows can indeed occur back towards the Fault, but only if there is distributed recharge over the orebody as well as a mechanism for the Fault, or a region near the Fault, to remove water from the simulated cross-section. The model also showed clearly that the response of the three-layered system, consisting of a highly weathered zone, a fractured transmissive zone and a less conductive lower schist zone, is governed mainly by the transmissivity and storage coefficient of the middle layer. The storage coefficient of the higher layer has little effect. A two-dimensional model in plan used a description of anisotropy to show that reverse flows can also occur even without a conducting Fault. Modelling of a three-dimensional region using discrete fractures showed that it is certainly possible to simulate systems like that observed at Koongarra, but that large amounts of data are probably needed to obtain realistic descriptions of the fracture networks. Inverse modelling

  9. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
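
    The Latin Hypercube Sampling step described above can be illustrated with a minimal Python sketch. The two parameters (mass and damping) and their bounds below are invented for illustration and are not taken from the thesis; the sketch only shows how an LHS design of parameter samples for the parallel filter models might be drawn.

        # Minimal LHS sketch; parameter names and bounds are hypothetical.
        from scipy.stats import qmc

        lower = [0.5, 0.05]   # assumed bounds for [mass (kg), damping (N s/m)]
        upper = [2.0, 0.50]

        sampler = qmc.LatinHypercube(d=2, seed=0)   # one stratum per sample in each dimension
        unit_samples = sampler.random(n=8)          # 8 parallel filter models
        param_samples = qmc.scale(unit_samples, lower, upper)

        for k, (m, c) in enumerate(param_samples):
            print(f"model {k}: mass={m:.3f}, damping={c:.3f}")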

  10. Gridded rainfall estimation for distributed modeling in western mountainous areas

    Science.gov (United States)

    Moreda, F.; Cong, S.; Schaake, J.; Smith, M.

    2006-05-01

    Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada as test areas: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km). The basins are subdivided into elevation zones. The North Fork American basin is divided into two zones, above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
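
    The inverse-distance interpolation step can be sketched in a few lines of Python. The station coordinates, rainfall values and power parameter below are invented, and the disaggregation and PRISM-adjustment steps of the actual processing are not reproduced.

        import numpy as np

        def idw_interpolate(xy_stations, values, xy_grid, power=2.0):
            """Inverse-distance-weighted interpolation of gauge values onto grid points."""
            d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
            d = np.maximum(d, 1e-6)                      # avoid division by zero at a station
            w = 1.0 / d**power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        # Invented example: three gauges interpolated onto two grid cells (km coordinates).
        stations = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
        rain_mm = np.array([2.0, 5.0, 3.0])
        grid = np.array([[2.0, 2.0], [3.0, 1.0]])
        print(idw_interpolate(stations, rain_mm, grid))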

  11. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2009-11-01

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model

  12. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
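
    The averaging step can be illustrated numerically. The sketch below assumes three hypothetical AI-model outputs at a single location; the BIC values, means and variances are invented, and the weight and variance formulas are the standard Bayesian model averaging expressions rather than code from the study.

        import numpy as np

        # Hypothetical hydraulic-conductivity estimates (m/day) from three AI models
        # at one location, with their within-model variances and BIC scores (invented).
        mu = np.array([1.8, 2.4, 2.1])        # model means
        var = np.array([0.10, 0.15, 0.12])    # within-model variances
        bic = np.array([152.0, 150.0, 155.0])

        # Standard BMA weights from BIC differences (smaller BIC -> larger weight).
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()

        k_bma = np.sum(w * mu)                    # averaged estimate
        within = np.sum(w * var)                  # uncertainty propagated from model inputs
        between = np.sum(w * (mu - k_bma) ** 2)   # uncertainty due to model non-uniqueness
        print(f"K_BMA = {k_bma:.2f}, total variance = {within + between:.3f}")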

  13. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point on the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed, making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
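
    A common screening diagnostic for the multicollinearity problem discussed above is the variance inflation factor (VIF). The short sketch below computes VIFs from invented meteorological predictors; it is a generic illustration, not the diagnostic procedure used in the paper.

        import numpy as np

        def vif(X):
            """Variance inflation factor for each column of the predictor matrix X."""
            X = np.asarray(X, dtype=float)
            out = []
            for j in range(X.shape[1]):
                y = X[:, j]
                Z = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
                beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                r2 = 1.0 - (y - Z @ beta).var() / y.var()
                out.append(1.0 / (1.0 - r2))
            return np.array(out)

        # Invented predictors: temperature, precipitation, and a nearly collinear soil temperature.
        rng = np.random.default_rng(1)
        temp = rng.normal(20, 5, 200)
        precip = rng.normal(50, 10, 200)
        soil_temp = temp + rng.normal(0, 0.5, 200)               # strongly correlated with temp
        print(vif(np.column_stack([temp, precip, soil_temp])))   # large VIFs flag collinearity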

  14. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  15. A new emergency response model for MACCS. Final report

    International Nuclear Information System (INIS)

    Chanin, D.I.

    1992-01-01

    Under DOE sponsorship, as directed by the Los Alamos National Laboratory (LANL), the MACCS code (version 1.5.11.1) [Ch92] was modified to implement a series of improvements in its modeling of emergency response actions. The purpose of this effort has been to aid the Westinghouse Savannah River Company (WSRC) in its performance of the Level III analysis for the Savannah River Site (SRS) probabilistic risk analysis (PRA) of K Reactor [Wo90]. To ensure its usefulness to WSRC, and facilitate the new model's eventual merger with other MACCS enhancements, close cooperation with WSRC and the MACCS development team at Sandia National Laboratories (SNL) was maintained throughout the project. These improvements are intended to allow a greater degree of flexibility in modeling the mitigative actions of evacuation and sheltering. The emergency response model in MACCS version 1.5.11.1 was developed to support NRC analyses of consequences from severe accidents at commercial nuclear power plants. The NRC code imposes unnecessary constraints on DOE safety analyses, particularly for consequences to onsite worker populations, and it has therefore been revamped. The changes to the code have been implemented in a manner that preserves previous modeling capabilities and therefore prior analyses can be repeated with the new code

  16. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
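
    The link between data and a dynamic model can be illustrated with a minimal sketch: a first-order degradation equation dC/dt = -kC is fitted to invented concentration data by nonlinear least squares. This is a generic example of the idea, not the identification method of the paper, and the rate constant and observations are hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def simulate(params, t):
            """First-order degradation model dC/dt = -k*C with unknown C0 and k."""
            c0, k = params
            sol = solve_ivp(lambda t_, c: -k * c, (t[0], t[-1]), [c0], t_eval=t)
            return sol.y[0]

        # Invented observations (days, mg/kg).
        t_obs = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])
        c_obs = np.array([10.2, 8.1, 6.3, 3.9, 1.6, 0.3])

        fit = least_squares(lambda p: simulate(p, t_obs) - c_obs, x0=[10.0, 0.1])
        c0_hat, k_hat = fit.x
        print(f"C0 = {c0_hat:.2f} mg/kg, k = {k_hat:.3f} 1/day, "
              f"half-life = {np.log(2) / k_hat:.1f} days")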

  17. Contributions in Radio Channel Sounding, Modeling, and Estimation

    DEFF Research Database (Denmark)

    Pedersen, Troels

    2009-01-01

    This thesis spans three strongly related topics in wireless communication: channel sounding, modeling, and estimation. Three main problems are addressed: optimization of spatio-temporal apertures for channel sounding; estimation of per-path power spectral densities (psds); and modeling ... relies on a "propagation graph" where vertices represent scatterers and edges represent the wave propagation conditions between scatterers. The graph has a recursive structure, which permits modeling of the transfer function of the graph. We derive a closed-form expression of the infinite-bounce impulse response. This expression is used for simulation of the impulse response of randomly generated propagation graphs. The obtained realizations exhibit the well-observed exponential power decay versus delay and the specular-to-diffuse transition.

  18. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality between mean and variance, it is not appropriate for overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyse count data. With count data, it is common to encounter observations with zero values making up a large proportion of the response variable (zero inflation). Poisson regression can be used to analyse count data, but it cannot handle an excess of zero values in the response variable. An alternative model which is more suitable for overdispersed data and can handle excess zero values in the response variable is the Zero Inflated Negative Binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to construct an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The Maximum Likelihood Estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partially significant effect in the negative binomial component are the percentage of pregnant women's visits and the percentage of deliveries assisted by health personnel, while the predictor variable with a partially significant effect in the zero-inflation component is the percentage of neonatal visits.
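
    For readers unfamiliar with the model, a minimal sketch of fitting a ZINB regression is given below, using the zero-inflated negative binomial implementation in statsmodels and simulated data. The covariate, coefficients and zero-inflation fraction are invented, and the EM-based estimation developed in the paper is not reproduced (statsmodels maximizes the likelihood directly).

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        # Simulated covariate and zero-inflated negative binomial counts (illustrative only).
        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)
        mu = np.exp(0.5 + 0.8 * x)                 # NB mean depends on x
        counts = rng.negative_binomial(2, 2.0 / (2.0 + mu))
        counts[rng.random(n) < 0.3] = 0            # 30% structural (excess) zeros

        X = sm.add_constant(x)
        model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2)
        result = model.fit(maxiter=200, disp=False)
        print(result.params)                       # count-model and inflation-model coefficients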

  19. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criteria (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
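
    The negative binomial fit itself can be sketched with a small maximum-likelihood example. The sighting counts below are invented, and the multi-population model suite and bootstrap machinery of the paper are not reproduced.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import nbinom

        # Invented sighting frequencies of uniquely identified females in one year.
        counts = np.array([1, 1, 2, 3, 1, 4, 2, 1, 5, 2, 1, 3])

        def neg_loglik(params):
            r, p = params
            if r <= 0 or not 0 < p < 1:
                return np.inf
            return -nbinom.logpmf(counts, r, p).sum()

        fit = minimize(neg_loglik, x0=[1.0, 0.5], method="Nelder-Mead")
        r_hat, p_hat = fit.x
        print(f"size r = {r_hat:.2f}, prob p = {p_hat:.2f}, "
              f"mean = {r_hat * (1 - p_hat) / p_hat:.2f}")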

  20. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400-500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden

  1. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples ... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(θ; Xt1, ..., Xtn) is equal to the classical optimal estimating function plus a correction term which takes the prior information into account. The methodology is particularly ...

  2. Model of the final borehole geometry for helical laser drilling

    Science.gov (United States)

    Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas

    2018-05-01

    A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.

  3. Models for genotype by environment interaction estimation on halomorphic soil

    Directory of Open Access Journals (Sweden)

    Dimitrijević Miodrag

    2006-01-01

    In genotype by environment interaction estimation, as well as in total trial variability analysis, several models are in use. The most often used are analysis of variance, the Eberhart and Russell model and the AMMI model. Each of the models has its own specificities in the way sources of variation are comprehended and treated. It is known that agriculturally less productive environments increase errors, diminish reaction differences between genotypes and decrease repeatability of conditions during years. A sample consisting of six bread wheat varieties was studied over three vegetation periods on halomorphic soil, solonetz type, in Banat (vil. Kumane). Genotype by environment interaction was quantified using ANOVA, the Eberhart and Russell model and the AMMI model. The results were compared not only on pure solonetz soil (control), but also on two levels of amelioration (25 and 50 t/ha phosphor-gypsum).

  4. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  5. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  6. Estimation of stature from sternum - Exploring the quadratic models.

    Science.gov (United States)

    Saraf, Ashish; Kanchan, Tanuj; Krishan, Kewal; Ateriya, Navneet; Setia, Puneet

    2018-04-14

    Identification of the dead is significant in the examination of unknown, decomposed and mutilated human remains. Establishing the biological profile is the central issue in such a scenario, and stature estimation remains one of the important criteria in this regard. The present study was undertaken to estimate stature from different parts of the sternum. A sample of 100 sterna was obtained from individuals during medicolegal autopsies. The length of the deceased and various measurements of the sternum were recorded. Student's t-test was performed to find the sex differences in stature and the sternal measurements included in the study. Correlations between stature and sternal measurements were analysed using Karl Pearson's correlation, and linear and quadratic regression models were derived. All the measurements were found to be significantly larger in males than females. Stature correlated best with the combined length of the sternum, among males (R = 0.894), females (R = 0.859), and for the total sample (R = 0.891). The study showed that the models derived for stature estimation from the combined length of the sternum are likely to give the most accurate estimates of stature in forensic case work when compared to the manubrium and mesosternum. Accuracy of stature estimation further increased with quadratic models derived for the mesosternum among males and the combined length of the sternum among males and females when compared to linear regression models. Future studies in different geographical locations and with a larger sample size are proposed to confirm the study observations. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
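
    The linear-versus-quadratic comparison can be sketched in a few lines. The sternum lengths and statures below are invented and the resulting coefficients have no forensic validity; the sketch only shows the form of the two regression models being compared.

        import numpy as np

        # Invented data: combined sternum length (mm) and stature (cm).
        length = np.array([135.0, 142.0, 150.0, 156.0, 161.0, 168.0, 174.0, 181.0])
        stature = np.array([158.0, 161.0, 165.0, 168.0, 170.0, 173.0, 176.0, 178.0])

        lin = np.polyfit(length, stature, deg=1)    # stature = b1*length + b0
        quad = np.polyfit(length, stature, deg=2)   # stature = b2*length**2 + b1*length + b0

        x = 165.0
        print(f"linear estimate:    {np.polyval(lin, x):.1f} cm")
        print(f"quadratic estimate: {np.polyval(quad, x):.1f} cm")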

  7. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, with the title Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report ...

  8. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: • SOC and capacity are dually estimated with an online adapted battery model. • Model identification and state dual estimation are fully decoupled. • Multiple timescales are used to improve estimation accuracy and stability. • The proposed method is verified with lab-scale experiments. • The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve the model accuracy and stability. Specifically, the model parameters are online adapted with the vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and the RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method on multiple battery chemistries. The proposed method is also compared with other existing methods on computational cost to reveal its superiority for practical application.
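
    The recursive least squares idea behind the online model adaptation can be sketched as follows. This is a textbook RLS with exponential forgetting, not the vector-type RLS of the paper, and the first-order regression used as a stand-in for the battery model (with invented parameters) is only illustrative.

        import numpy as np

        class RecursiveLeastSquares:
            """Textbook RLS with exponential forgetting for online parameter adaptation."""
            def __init__(self, n_params, forgetting=0.99):
                self.theta = np.zeros(n_params)      # parameter estimate
                self.P = np.eye(n_params) * 1e3      # estimate covariance
                self.lam = forgetting

            def update(self, phi, y):
                phi = np.asarray(phi, dtype=float)
                k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain
                self.theta = self.theta + k * (y - phi @ self.theta)  # correct by prediction error
                self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
                return self.theta

        # Invented first-order regression y_t = a*y_{t-1} + b*u_{t-1}, standing in for a battery model.
        rls = RecursiveLeastSquares(2)
        rng = np.random.default_rng(0)
        theta_true, y_prev = np.array([0.95, 0.02]), 0.0
        for _ in range(200):
            u_prev = rng.uniform(-1.0, 1.0)
            y = theta_true @ np.array([y_prev, u_prev]) + rng.normal(0.0, 1e-3)
            est = rls.update([y_prev, u_prev], y)
            y_prev = y
        print(est)    # should approach theta_true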

  9. A state-space model for estimating detailed movements and home range from acoustic receiver data

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Weng, Kevin

    2013-01-01

    We present a state-space model for acoustic receiver data to estimate detailed movement and home range of individual fish while accounting for spatial bias. An integral part of the approach is the detection function, which models the probability of logging tag transmissions as a function of distance ... that the location error scales log-linearly with detection range and movement speed. This result can be used as a guideline for designing network layout when species movement capacity and acoustic environment are known or can be estimated prior to network deployment. Finally, as an example, the state-space model is used to estimate home range and movement of a reef fish in the Pacific Ocean.

  10. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. Continuous degradation in a system is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is necessary to provide at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend that lifetime. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches with accompanying advantages and disadvantages are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaption of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but involves a high number of degrees of freedom. The second approach in this contribution does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. Prediction accuracy of this model does not depend strongly on the estimated SoH. This model

  11. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  12. Model Year 2015 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2014-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
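
    The yearly fuel cost calculation the Guide supports amounts to simple arithmetic, sketched below with illustrative figures; the annual mileage and fuel price are assumptions, not values from the Guide.

        # Illustrative annual fuel cost: miles per year / mpg * price per gallon.
        annual_miles = 15000          # assumed annual mileage
        mpg = 28                      # combined fuel economy taken from the Guide listing
        price_per_gallon = 3.50       # assumed fuel price (USD)

        annual_cost = annual_miles / mpg * price_per_gallon
        print(f"Estimated yearly fuel cost: ${annual_cost:,.0f}")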

  13. Model Year 2009 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2008-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  14. Model Year 2005 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2004-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  15. Is the Rational Addiction model inherently impossible to estimate?

    Science.gov (United States)

    Laporte, Audrey; Dass, Adrian Rohit; Ferguson, Brian S

    2017-07-01

    The Rational Addiction (RA) model is increasingly often estimated using individual level panel data with mixed results; in particular, with regard to the implied rate of time discount. This paper suggests that the odd values of the rate of discount frequently found in the literature may in fact be a consequence of the saddle-point dynamics associated with individual level inter-temporal optimization problems. We report the results of Monte Carlo experiments estimating RA-type difference equations that seem to suggest the possibility that the presence of both a stable and an unstable root in the dynamic process may create serious problems for the estimation of RA equations. Copyright © 2016 Elsevier B.V. All rights reserved.
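
    A stylized version of such a Monte Carlo experiment is sketched below: an ordinary AR(1) series is regressed on its own lead and lag, as in RA-type difference equations, and the implied discount factor is read off as the ratio of the lead coefficient to the lag coefficient. The data-generating parameters are invented, and the sketch ignores the instrumental-variable and panel aspects of actual RA estimation; it only illustrates how lead/lag regressions can yield implausible implied discount rates.

        import numpy as np

        rng = np.random.default_rng(0)
        rho, n = 0.8, 5000
        c = np.zeros(n)
        for t in range(1, n):
            c[t] = rho * c[t - 1] + rng.normal()

        y = c[1:-1]
        X = np.column_stack([np.ones(n - 2), c[:-2], c[2:]])   # constant, lag, lead
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        beta_implied = b[2] / b[1]                             # conventional RA reading
        print(f"b_lag = {b[1]:.3f}, b_lead = {b[2]:.3f}, "
              f"implied discount factor = {beta_implied:.2f}")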

  16. Model Year 2016 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  17. Model Year 2010 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-10-14

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  18. Model Year 2014 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  19. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  20. Computational modeling of drug-resistant bacteria. Final report

    International Nuclear Information System (INIS)

    2015-01-01

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  1. Computational modeling of drug-resistant bacteria. Final report

    Energy Technology Data Exchange (ETDEWEB)

    MacDougall, Preston [Middle Tennessee State Univ., Murfreesboro, TN (United States)

    2015-03-12

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  2. Theory, Modeling and Simulation Annual Report 2000; FINAL

    International Nuclear Information System (INIS)

    Dixon, David A; Garrett, Bruce C; Straatsma, TP; Jones, Donald R; Studham, Scott; Harrison, Robert J; Nichols, Jeffrey A

    2001-01-01

    This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM&S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems

  3. Experimental Benchmarking of Fire Modeling Simulations. Final Report

    International Nuclear Information System (INIS)

    Greiner, Miles; Lopez, Carlos

    2003-01-01

    A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses

  4. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.

  5. Mathematical modeling of the voloxidation process. Final report

    International Nuclear Information System (INIS)

    Stanford, T.G.

    1979-06-01

    A mathematical model of the voloxidation process, a head-end reprocessing step for the removal of volatile fission products from spent nuclear fuel, has been developed. Three types of voloxidizer operation have been considered: co-current operation, in which the gas and solid streams flow in the same direction; countercurrent operation, in which the gas and solid streams flow in opposite directions; and semi-batch operation, in which the gas stream passes through the reactor while the solids remain in it and are processed batchwise. Because of the complexity of the physical and chemical processes which occur during the voloxidation process and the lack of currently available kinetic data, a global kinetic model has been adapted for this study. Test cases for each mode of operation have been simulated using representative values of the model parameters. To process 714 kgm/day of spent nuclear fuel, using an oxidizing atmosphere containing 20 mole percent oxygen, it was found that a reactor 0.7 m in diameter and 2.49 m in length would be required for both co-current and countercurrent modes of operation, while for semi-batch operation a 0.3 m³ reactor and an 88200 sec batch processing time would be required

  6. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  7. Evaporation estimation of rift valley lakes: comparison of models.

    Science.gov (United States)

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
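
    For reference, the Simple (Abtew) Method referred to above is usually quoted as a purely radiation-based relation of the form below; the coefficient value is the one commonly cited in the literature and should be checked against the original source before use.

        E = K * Rs / lambda

    where E is open-water evaporation (mm/day), Rs is incident solar radiation (MJ m^-2 day^-1), lambda is the latent heat of vaporization (about 2.45 MJ/kg), and K is a dimensionless coefficient often taken as roughly 0.53.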

  8. Evaporation Estimation of Rift Valley Lakes: Comparison of Models

    Directory of Open Access Journals (Sweden)

    Tibebe Dessalegne

    2009-12-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.

  9. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  10. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to coded models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions

  11. Model for Estimation of Fuel Consumption of Cruise Ships

    Directory of Open Access Journals (Sweden)

    Morten Simonsen

    2018-04-01

    Full Text Available This article presents a model to estimate the energy use and fuel consumption of cruise ships that sail Norwegian waters. Automatic identification system (AIS) data and technical information about cruise ships provided input to the model, including service speed, total power, and number of engines. The model was tested against real-world data obtained from a small cruise vessel and both a medium and large cruise ship. It is sensitive to speed and the corresponding engine load profile of the ship. A crucial determinant of total fuel consumption is the hotel functions, which can make a large contribution to the overall energy use of cruise ships. Real-world data fit the model best when ship speed is 70–75% of service speed. With decreased or increased speed, the model tends to diverge from real-world observations. The model gives a proxy for calculation of the fuel consumption of cruise ships that sail in Norwegian waters and can be used to estimate greenhouse gas emissions and to evaluate energy reduction strategies for cruise ships.

  12. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and an atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model based upon measurements at specific monitoring locations and linear regression, although this model is intrinsically non-linear. With these estimated parameters and the Gaussian model the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the estimates for the deposition, deposition data have been generated by the Gaussian model, including a measurement error, by a Monte Carlo simulation, and this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The present technique has also been applied for the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and a dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor 1.5 worse. The present type of analysis makes a cost-benefit approach to the
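
    The idea of estimating the parameters of the intrinsically non-linear Gaussian model by linear regression can be illustrated for a crosswind deposition profile: taking the logarithm of the profile makes it linear in the squared crosswind distance. The sketch below is a hypothetical toy, not the code used in the ECN study.

        # Illustrative sketch (not the ECN code): fitting a Gaussian crosswind
        # deposition profile D(y) = D0 * exp(-y^2 / (2 sigma^2)) by linear
        # regression on log-transformed measurements, ln D = ln D0 - y^2/(2 sigma^2).
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "true" profile and noisy measurements at monitoring locations.
        true_d0, true_sigma = 5.0e3, 2.0          # arbitrary units, km
        y = np.linspace(-6.0, 6.0, 13)            # crosswind monitor positions (km)
        noise = rng.normal(1.0, 0.05, y.size)     # 5 % multiplicative measurement error
        d_meas = true_d0 * np.exp(-y**2 / (2 * true_sigma**2)) * noise

        # Linear least squares: ln D = a + b * y^2, with a = ln D0, b = -1/(2 sigma^2).
        A = np.column_stack([np.ones_like(y), y**2])
        (a, b), *_ = np.linalg.lstsq(A, np.log(d_meas), rcond=None)

        d0_hat = np.exp(a)
        sigma_hat = np.sqrt(-1.0 / (2.0 * b))
        print(f"estimated D0 = {d0_hat:.1f}, sigma = {sigma_hat:.2f} km")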

  13. A consensus approach for estimating the predictive accuracy of dynamic models in biology.

    Science.gov (United States)

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R

    2015-04-01

    Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
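
    The consensus idea can be sketched generically: calibrate an ensemble of parameterizations, aggregate their predictions, and use the ensemble spread as an inverse indicator of confidence. The exponential-decay model below is a toy stand-in, not the CHO cell model from the paper.

        # Generic sketch of consensus-based confidence for ensemble predictions:
        # aggregate the outputs of several model parameterizations and use their
        # spread (low variance = high consensus) as a confidence indicator.
        # The exponential-decay "model" below is a toy stand-in, not the CHO model.
        import numpy as np

        rng = np.random.default_rng(1)

        def model(t, k):
            return np.exp(-k * t)          # toy dynamic model output

        t = np.linspace(0.0, 5.0, 50)
        ensemble_k = rng.normal(0.8, 0.1, size=30)   # calibrated parameter sets
        predictions = np.array([model(t, k) for k in ensemble_k])

        consensus_mean = predictions.mean(axis=0)    # aggregated ensemble prediction
        consensus_std = predictions.std(axis=0)      # spread: inverse measure of consensus

        # Flag time points where the ensemble agrees tightly (high-confidence predictions).
        high_confidence = consensus_std < 0.02
        print(f"high-consensus points: {high_confidence.sum()} of {t.size}")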

  14. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error makes it possible to considerably improve its predictions. Then, for a meta-modeling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author) [fr
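
    As a generic illustration of Maximum Likelihood estimation of a covariance parameter in a Kriging/Gaussian-process setting, the sketch below fits the length-scale of a squared-exponential covariance to toy one-dimensional data by minimizing the negative log-likelihood. It is an assumption-laden toy, unrelated to the FLICA 4 and GERMINAL applications.

        # Sketch of Maximum Likelihood estimation of a covariance parameter
        # (length-scale of a squared-exponential kernel) for a 1-D Gaussian process.
        # Toy data; not the FLICA 4 / GERMINAL applications described above.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        x = np.sort(rng.uniform(0.0, 10.0, 40))
        y = np.sin(x) + rng.normal(0.0, 0.1, x.size)     # observations of the process

        def neg_log_likelihood(length_scale, nugget=1e-2):
            d = x[:, None] - x[None, :]
            K = np.exp(-0.5 * (d / length_scale) ** 2) + nugget * np.eye(x.size)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            log_det = 2.0 * np.log(np.diag(L)).sum()
            return 0.5 * (y @ alpha + log_det + x.size * np.log(2.0 * np.pi))

        res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 5.0), method="bounded")
        print(f"ML estimate of the length-scale: {res.x:.3f}")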

  15. Modelling, Estimation and Control of Networked Complex Systems

    CERN Document Server

    Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

    2009-01-01

    The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economics events, networks of critical infrastructures, resources allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and decreasing in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...

  16. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  17. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with Markovian environment. We set up the associated reliability framework by considering failure of the structure to occur once the degradation process reaches a critical threshold. A closed form solution of the reliability function is obtained thanks to Markov renewal theory. Then, we build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  18. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  19. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
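
    The sampling-variance experiment described in these two records can be mimicked in a few lines: draw vital-rate estimates from samples of different sizes, assemble a projection matrix, take lambda as its dominant eigenvalue, and measure the bias against the value computed from the 'true' rates. The two-stage matrix and vital rates below are hypothetical, not those of the original plant data set.

        # Sketch of the sampling-variance experiment: draw survival/fecundity
        # estimates from samples of different sizes, build a 2-stage projection
        # matrix, take lambda as its dominant eigenvalue, and measure the bias
        # relative to the "true" lambda. Vital rates here are illustrative only.
        import numpy as np

        rng = np.random.default_rng(3)
        TRUE_SURVIVAL = 0.5        # low survival, where bias is expected to be largest
        TRUE_GROWTH = 0.3          # probability of moving to the reproductive stage
        TRUE_FECUNDITY = 1.2       # offspring per reproductive individual

        def dominant_eigenvalue(s_juv, growth, s_adult, fecundity):
            A = np.array([[s_juv * (1 - growth), fecundity],
                          [s_juv * growth,       s_adult]])
            return np.max(np.real(np.linalg.eigvals(A)))

        true_lambda = dominant_eigenvalue(TRUE_SURVIVAL, TRUE_GROWTH, 0.7, TRUE_FECUNDITY)

        for n in (10, 50, 250, 1000):                      # individuals sampled per stage
            lams = []
            for _ in range(2000):
                s_hat = rng.binomial(n, TRUE_SURVIVAL) / n          # sampled survival
                g_hat = rng.binomial(n, TRUE_GROWTH) / n            # sampled growth rate
                f_hat = rng.poisson(TRUE_FECUNDITY * n) / n         # sampled fecundity
                lams.append(dominant_eigenvalue(s_hat, g_hat, 0.7, f_hat))
            bias = np.mean(lams) - true_lambda
            print(f"n = {n:4d}: mean bias in lambda = {bias:+.4f}")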

  20. Daily Discharge Estimation in Talar River Using Lazy Learning Model

    Directory of Open Access Journals (Sweden)

    Zahra Abdollahi

    2017-03-01

    Full Text Available Introduction: River discharge, as one of the most important hydrological factors, has a vital role in physical, ecological, social and economic processes. Accurate and reliable prediction and estimation of river discharge have therefore been widely considered by many researchers in different fields, such as surface water management, design of hydraulic structures, flood control and ecological studies, at spatial and temporal scales. In recent decades, different techniques for short-term and long-term estimation of hourly, daily, monthly and annual discharge have been developed; however, short-term estimation models are less sophisticated and more accurate. Various global and local algorithms have been widely used to estimate hydrologic variables. The current study uses the Lazy Learning approach to evaluate the adequacy of the input data in following the variation of discharge and to simulate next-day discharge in the Talar River in the Kasilian Basin, which is located in the north of Iran and has an area of 66.75 km2. Lazy learning is a local linear modelling approach in which generalization beyond the training data is delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize from the training data before receiving queries. Materials and Methods: The study was conducted in the Kasilian Basin, located in the north of Iran with an area of 66.75 km2. The main river of this basin joins the Talar River near Valicbon village and then exits the watershed. The hydrometric station located near Valicbon village is equipped with a Parshall flume and a limnograph, which can record a river discharge of about 20 cubic meters per second. In this study, daily discharge data recorded at the Valicbon station from 2002 to 2012 were used to estimate the discharge of 19 September 2012. The mean annual discharge of the considered river, calculated from the available data, is about 0.441 cubic meters per second. To
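
    The lazy-learning idea, deferring all modelling until a query arrives and then fitting a local linear model to the nearest historical patterns, can be sketched as follows. The synthetic discharge series, lag structure and neighbourhood size are illustrative assumptions, not the Valicbon data or the authors' exact configuration.

        # Sketch of a lazy-learning (local linear) estimator for next-day discharge:
        # nothing is fitted until a query arrives; then a linear model is fitted to
        # the k nearest historical patterns of lagged discharge. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic daily discharge series standing in for the Valicbon record.
        t = np.arange(2000)
        q = 0.44 + 0.3 * np.sin(2 * np.pi * t / 365.0) + 0.05 * rng.standard_normal(t.size)

        LAGS = 3
        X = np.column_stack([q[i:i - LAGS] for i in range(LAGS)])   # q(t-3), q(t-2), q(t-1)
        y = q[LAGS:]                                                # q(t)

        def lazy_predict(query, k=25):
            """Local linear prediction of the next-day discharge for a lag vector."""
            dist = np.linalg.norm(X - query, axis=1)
            idx = np.argsort(dist)[:k]                     # k nearest historical patterns
            A = np.column_stack([np.ones(k), X[idx]])      # local design matrix
            coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            return coef[0] + coef[1:] @ query

        last_three_days = q[-LAGS:]
        print(f"next-day discharge estimate: {lazy_predict(last_three_days):.3f} m3/s")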

  1. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption...... and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory....

  2. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...

  3. MATHEMATICAL MODEL FOR ESTIMATION OF MECHANICAL SYSTEM CONDITION IN DYNAMICS

    Directory of Open Access Journals (Sweden)

    D. N. Mironov

    2011-01-01

    Full Text Available The paper considers estimation of the condition of a complicated mechanical system in dynamics, with due account of material degradation and the accumulation of micro-damage. An element of a continuous medium has been simulated and described with the help of a discrete element. The paper contains a description of a model for determining mechanical system longevity in accordance with the number of cycles and the operational period.

  4. Nonparametric Estimation of Regression Parameters in Measurement Error Models

    Czech Academy of Sciences Publication Activity Database

    Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan

    2009-01-01

    Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others:GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords : asymptotic relative efficiency(ARE) * asymptotic theory * emaculate mode * Me model * R-estimation * Reliabilty ratio(RR) Subject RIV: BB - Applied Statistics, Operational Research

  5. Leaf Area Estimation Models for Ginger ( Zingibere officinale Rosc ...

    African Journals Online (AJOL)

    The study was carried out to develop leaf area estimation models for three cultivars (37/79, 38/79 and 180/73) and four accessions (29/86, 30/86, 47/86 and 52/86) of ginger. Significant variations were observed among the tested genotypes in leaf length (L), leaf width (W) and actual leaf area (ALA). Leaf area was highly ...

  6. Estimation of Continuous Time Models in Economics: an Overview

    OpenAIRE

    Clifford R. Wymer

    2009-01-01

    The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...

  7. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
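
    The archive-based analysis described above can be sketched with a bare-bones GA: keep the best solution of every generation and inspect how quickly each parameter stabilizes; parameters to which the objective is most sensitive settle first. The two-parameter toy objective below is an assumption, not the lumped reactor model of the study.

        # Compact sketch of the idea described above: during a GA search for model
        # parameters, keep an archive of the best solution per generation and inspect
        # how quickly each parameter stabilizes; early-stabilizing parameters are the
        # ones the objective is most sensitive to. The objective here is a toy.
        import numpy as np

        rng = np.random.default_rng(11)

        def objective(p):
            # Toy model misfit: very sensitive to p[0], weakly sensitive to p[1].
            return 100.0 * (p[0] - 0.3) ** 2 + 0.1 * (p[1] - 0.7) ** 2

        pop = rng.uniform(0.0, 1.0, size=(40, 2))
        archive = []                                         # best solution per generation

        for generation in range(60):
            fitness = np.array([objective(p) for p in pop])
            order = np.argsort(fitness)
            archive.append(pop[order[0]].copy())
            parents = pop[order[: len(pop) // 2]]            # selection
            children = []
            for _ in range(len(pop)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(2) < 0.5, a, b)  # uniform crossover
                child += rng.normal(0.0, 0.02, 2)            # mutation
                children.append(np.clip(child, 0.0, 1.0))
            pop = np.array(children)                         # replacement

        archive = np.array(archive)
        # Spread of the archived best values over the last 20 generations per parameter:
        print("late-search spread per parameter:", archive[-20:].std(axis=0).round(4))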

  8. Estimating the Multilevel Rasch Model: With the lme4 Package

    Directory of Open Access Journals (Sweden)

    Harold Doran

    2007-02-01

    Full Text Available Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations, in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) how to estimate a teacher × content strand interaction.

  9. Internal combustion engines - Modelling, estimation and control issues

    Energy Technology Data Exchange (ETDEWEB)

    Vigild, C.W.

    2001-12-01

    Alternative power-trains have become buzz words in the automotive industry in the recent past. New technologies like Lithium-Ion batteries or fuel cells combined with highly efficient electrical motors show promising results. However, both technologies are extremely expensive and important questions like 'How are we going to supply fuel-cells with hydrogen in an environmentally friendly way?', 'How are we going to improve the range - and recharging speed - of electrical vehicles?' and 'How will our existing infrastructure cope with such changes?' are still left unanswered. Hence, the internal combustion engine with all its shortcomings is going to stay with us for many years to come. What the future will really bring in this area is uncertain, but one thing can be said for sure: the time of the pipe in - pipe out engine concept is over. Modern engines, Diesel or gasoline, have in the recent past been provided with many new technologies to improve both performance and handling and to cope with the tightening emission legislation. However, as new devices are included, the number of control inputs is also gradually increased. Hence, the control matrix dimension has grown to a considerable size, and the typical table- and regression-based engine calibration procedures currently in use today contain both challenging and time-consuming tasks. One way to improve understanding of engines and provide a more comprehensive picture of the control problem is by use of simplified physical modelling - one of the main thrusts of this dissertation. The application of simplified physical modelling as a foundation for engine estimation and control design is first motivated by two control applications. The control problem concerns Air/Fuel ratio control of Spark Ignition engines. Two different ways of control are presented; one based on a model-based Extended Kalman Filter updated predictor, and one based on robust H-infinity techniques. Both controllers are

  10. In-phase and quadrature imbalance modeling, estimation, and compensation

    CERN Document Server

    Li, Yabo

    2013-01-01

    This book provides a unified IQ imbalance model and systematically reviews the existing estimation and compensation schemes. It covers the different assumptions and approaches that lead to many models of IQ imbalance. In wireless communication systems, the In-phase and Quadrature (IQ) modulator and demodulator are usually used as transmitter (TX) and receiver (RX), respectively. For Digital-to-Analog Converter (DAC) and Analog-to-Digital Converter (ADC) limited systems, such as multi-giga-hertz bandwidth millimeter-wave systems, using analog modulator and demodulator is still a low power and l

  11. Estimating true evolutionary distances under the DCJ model.

    Science.gov (United States)

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  12. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods are under the registration-detection framework and can only deal with simple background scenes. They tend to go wrong in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatic estimation of multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each block with large area and merge the consistent models. Finally, we calculate the degree of membership to the multiple background models pixel by pixel for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. The extensive experimental results on public aerial videos show that, owing to the estimation of multiple background models and the analysis of each pixel's membership of these models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.

  13. Development of GP and GEP models to estimate an environmental issue induced by blasting operation.

    Science.gov (United States)

    Faradonbeh, Roohollah Shirani; Hasanipanah, Mahdi; Amnieh, Hassan Bakhshandeh; Armaghani, Danial Jahed; Monjezi, Masoud

    2018-05-21

    Air overpressure (AOp) is one of the most adverse effects induced by blasting in surface mines and civil projects. So, proper evaluation and estimation of the AOp is important for minimizing the environmental problems resulting from blasting. The main aim of this study is to estimate the AOp produced by the blasting operation in the Miduk copper mine, Iran, by developing two artificial intelligence models, i.e., genetic programming (GP) and gene expression programming (GEP). Then, the accuracy of the GP and GEP models has been compared to multiple linear regression (MLR) and three empirical models. For this purpose, 92 blasting events were investigated, and subsequently, the AOp values were carefully measured. Moreover, in each operation, the values of the maximum charge per delay and the distance from the blast points, as two parameters with a strong effect on the AOp, were measured. After prediction, the performance of the predictive models was checked in terms of variance accounted for (VAF), coefficient of determination (CoD), and root mean square error (RMSE). Finally, it was found that the GEP model, with a VAF of 94.12%, CoD of 0.941, and RMSE of 0.06, is more precise than the other predictive models for AOp prediction in the Miduk copper mine, and it can be introduced as a new powerful tool for estimating the AOp resulting from blasting.
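
    The three performance indices used above (VAF, CoD and RMSE) have standard definitions that are easy to compute; the sketch below applies them to synthetic measured and predicted AOp values, which are purely illustrative.

        # Standard definitions of the three indices used above to rank the predictive
        # models (VAF, coefficient of determination, RMSE), applied to synthetic
        # measured/predicted AOp values; the numbers are illustrative only.
        import numpy as np

        def vaf(measured, predicted):
            """Variance accounted for, in percent."""
            return (1.0 - np.var(measured - predicted) / np.var(measured)) * 100.0

        def cod(measured, predicted):
            """Coefficient of determination (R^2)."""
            ss_res = np.sum((measured - predicted) ** 2)
            ss_tot = np.sum((measured - measured.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def rmse(measured, predicted):
            return np.sqrt(np.mean((measured - predicted) ** 2))

        rng = np.random.default_rng(5)
        aop_measured = rng.uniform(100.0, 130.0, 30)              # dB, synthetic
        aop_predicted = aop_measured + rng.normal(0.0, 1.5, 30)   # a hypothetical model

        print(f"VAF  = {vaf(aop_measured, aop_predicted):.2f} %")
        print(f"CoD  = {cod(aop_measured, aop_predicted):.3f}")
        print(f"RMSE = {rmse(aop_measured, aop_predicted):.3f}")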

  14. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma 'Tor Vergata', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma 'Tor Vergata', Rome (Italy); Belli, P. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma 'La Sapienza', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Universita di Roma 'Tor Vergata', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

    The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)

  15. Modelling and Evaluation of Aircraft Emissions. Final report

    International Nuclear Information System (INIS)

    Savola, M.

    1996-01-01

    An application was developed to calculate the emissions and fuel consumption of a jet and turboprop powered aircraft in Finnair's scheduled and charter traffic both globally and in the Finnish flight information regions. The emissions calculated are nitrogen oxides, unburnt hydrocarbons and carbon monoxide. The study is based on traffic statistics of one week taken from three scheduled periods in 1993. Each flight was studied by dividing the flight profile into sections. The flight profile data are based on aircraft manufacturers' manuals, and they serve as initial data for engine manufacturers' emission calculation programs. In addition, the study includes separate calculations on air traffic emissions at airports during the so-called LTO cycle. The fuel consumption calculated for individual flights is 419,395 tonnes globally, and 146,142 tonnes in the Finnish flight information regions. According to Finnair's statistics the global fuel consumption is 0.97-fold compared with the result given by the model. The results indicate that in 1993 the global nitrogen oxide emissions amounted to 5,934 tonnes, the unburnt hydrocarbon emissions totalled 496 tonnes and carbon monoxide emissions 1,664 tonnes. The corresponding emissions in the Finnish flight information regions were as follows: nitrogen oxides 2,105 tonnes, unburnt hydrocarbons 177 tonnes and carbon monoxide 693 tonnes. (orig.)

  16. Community Earth System Model (CESM) Tutorial 2016 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Lamarque, Jean-Francois [Univ. Corporation for Atmospheric Research (UCAR) and National Center for Atmospheric Research (NCAR) and Climate and Global Dynamics Laboratory (CGD), Boulder, CO (United States)

    2017-05-09

    For the 2016 tutorial, NCAR/CGD requested a total budget of $70,000 split equally between DOE and NSF. The funds were used to support student participation (travel, lodging, per diem, etc.). Lectures and practical session support was primarily provided by local participants at no additional cost (see list below). The seventh annual Community Earth System Model (CESM) tutorial (2016) for students and early career scientists was held 8 – 12 August 2016. As has been the case over the last few years, this event was extremely successful and there was greater demand than could be met. There was continued interest in support of the NSF’s EaSM Infrastructure awards, to train these awardees in the application of the CESM. Based on suggestions from previous tutorial participants, the 2016 tutorial experience again provided direct connection to Yellowstone for each individual participant (rather than pairs), and used the NCAR Mesa Library. The 2016 tutorial included lectures on simulating the climate system and practical sessions on running CESM, modifying components, and analyzing data. These were targeted to the graduate student level. In addition, specific talks (“Application” talks) were introduced this year to provide participants with some in-depth knowledge of some specific aspects of CESM.

  17. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  18. Comparison of two perturbation methods to estimate the land surface modeling uncertainty

    Science.gov (United States)

    Su, H.; Houser, P.; Tian, Y.; Kumar, S.; Geiger, J.; Belvedere, D.

    2007-12-01

    In land surface modeling, it is almost impossible to simulate the land surface processes without any error because the earth system is highly complex and the physics of the land processes has not yet been understood sufficiently. In most cases, people want to know not only the model output but also the uncertainty in the modeling, to estimate how reliable the modeling is. Ensemble perturbation is an effective way to estimate the uncertainty in land surface modeling, since land surface models are highly nonlinear, which makes the analytical approach inapplicable in this estimation. The ideal perturbation noise has a zero-mean Gaussian distribution; however, this requirement cannot be satisfied if the perturbed variables in a land surface model have physical bounds, because part of the perturbation noise has to be removed to feed the land surface models properly. Two different perturbation methods are employed in our study to investigate their impact on quantifying land surface modeling uncertainty, based on the Land Information System (LIS) framework developed by the NASA/GSFC land team. One perturbation method is the built-in algorithm named "STATIC" in LIS version 5; the other is a new perturbation algorithm which was recently developed to minimize the overall bias in the perturbation by incorporating additional information from the whole time series for the perturbed variable. The statistical properties of the perturbation noise generated by the two different algorithms are investigated thoroughly by using a large ensemble size on a NASA supercomputer, and then the corresponding uncertainty estimates based on the two perturbation methods are compared. Their further impacts on data assimilation are also discussed. Finally, an optimal perturbation method is suggested.
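
    The bias problem described above can be illustrated with a toy example: a zero-mean Gaussian perturbation applied to a bounded variable (e.g., soil moisture near its lower bound) is no longer zero-mean once the out-of-range part is removed. The sketch below is not the LIS "STATIC" algorithm or the new algorithm; it only demonstrates the effect.

        # Toy illustration of the bias discussed above: zero-mean Gaussian perturbations
        # applied to a bounded variable (e.g. soil moisture in [0, 1]) stop being
        # zero-mean once the out-of-range part is removed. Not the LIS algorithms.
        import numpy as np

        rng = np.random.default_rng(6)

        soil_moisture = 0.05                      # a state value close to its lower bound
        noise = rng.normal(0.0, 0.1, 100000)      # ideal zero-mean perturbation ensemble

        perturbed = np.clip(soil_moisture + noise, 0.0, 1.0)   # enforce the physical bound

        print(f"mean of raw perturbations:       {noise.mean():+.5f}")
        print(f"mean bias after enforcing bound: {(perturbed.mean() - soil_moisture):+.5f}")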

  19. Lagrangian speckle model and tissue-motion estimation--theory.

    Science.gov (United States)

    Maurice, R L; Bertrand, M

    1999-07-01

    It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., changes in speckle patterns that result act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other, the alterations of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. This method leads to an analytical description of the decorrelation in a way which enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for linear geometrical transformation of the scattering function (LT), i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested, using simulated radio-frequency (RF) images of tissue undergoing axial shear.

  20. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  1. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V.

    2002-12-01

    The aim of this thesis is to build a statistical model of oil and gas fields' size distribution in a given sedimentary basin, for both the fields that exist in the subsoil and those which have already been discovered. The estimation of all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials with penalized maximum likelihood techniques, makes it possible to provide estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground fields' sizes is an i.i.d. sample from an unknown population with a Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the size interval being set (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint on the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  2. Effective Classroom Management and Instruction: An Exploration of Models. Executive Summary of Final Report.

    Science.gov (United States)

    Evertson, Carolyn M.; And Others

    A summary is presented of the final report, "Effective Classroom Management and Instruction: An Exploration of Models." The final report presents a set of linked investigations of the effects of training teachers in effective classroom management practices in a series of school-based workshops. Four purposes were addressed by the study: (1) to…

  3. Final Report: Optimal Model Complexity in Geological Carbon Sequestration: A Response Surface Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ye [Univ. of Wyoming, Laramie, WY (United States)

    2018-01-17

    The critical component of a risk assessment study in evaluating GCS is an analysis of uncertainty in CO2 modeling. In such analyses, direct numerical simulation of CO2 flow and leakage requires many time-consuming model runs. Alternatively, analytical methods have been developed which allow fast and efficient estimation of CO2 storage and leakage, although restrictive assumptions on formation rock and fluid properties are employed. In this study, an intermediate approach is proposed based on the Design of Experiment and Response Surface methodology, which consists of using a limited number of numerical simulations to estimate a prediction outcome as a combination of the most influential uncertain site properties. The methodology can be implemented within a Monte Carlo framework to efficiently assess parameter and prediction uncertainty while honoring the accuracy of numerical simulations. The choice of the uncertain properties is flexible and can include geologic parameters that influence reservoir heterogeneity, engineering parameters that influence gas trapping and migration, and reactive parameters that influence the extent of fluid/rock reactions. The method was tested and verified on modeling long-term CO2 flow, non-isothermal heat transport, and CO2 dissolution storage by coupling two-phase flow with explicit miscibility calculation using an accurate equation of state that gives rise to convective mixing of formation brine variably saturated with CO2. All simulations were performed using three-dimensional high-resolution models including a target deep saline aquifer, overlying caprock, and a shallow aquifer. To evaluate the uncertainty in representing reservoir permeability, the sediment hierarchy of a heterogeneous digital stratigraphy was mapped to create multiple irregularly shaped stratigraphic models of decreasing geologic resolution: heterogeneous (reference), lithofacies, depositional environment, and a (homogeneous) geologic formation. To ensure model
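
    The Design of Experiment and Response Surface idea can be sketched generically: a handful of simulator runs over a small design are used to fit a quadratic response surface in the uncertain site properties, and the cheap surface is then sampled by Monte Carlo in place of the full simulator. The stand-in 'simulator' and parameter ranges below are assumptions, not the study's reservoir models.

        # Sketch of the Design-of-Experiment / Response-Surface idea: a handful of
        # "simulator" runs (a cheap analytic stand-in here) are used to fit a quadratic
        # response surface in two uncertain site properties, which is then sampled by
        # Monte Carlo instead of the full simulator. Purely illustrative.
        import numpy as np

        rng = np.random.default_rng(7)

        def simulator(log_perm, porosity):
            """Stand-in for an expensive CO2 leakage simulation."""
            return 1.0 + 0.8 * log_perm - 2.0 * porosity + 0.3 * log_perm * porosity

        # Small factorial design over the two uncertain properties.
        design = np.array([(lp, phi) for lp in (-1.0, 0.0, 1.0) for phi in (0.1, 0.2, 0.3)])
        responses = np.array([simulator(lp, phi) for lp, phi in design])

        # Quadratic response surface: 1, x1, x2, x1*x2, x1^2, x2^2.
        def features(x1, x2):
            return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

        coef, *_ = np.linalg.lstsq(features(design[:, 0], design[:, 1]), responses, rcond=None)

        # Monte Carlo over parameter uncertainty using the cheap surrogate.
        lp_mc = rng.normal(0.0, 0.5, 10000)
        phi_mc = rng.uniform(0.1, 0.3, 10000)
        leakage_mc = features(lp_mc, phi_mc) @ coef

        print(f"P50 = {np.percentile(leakage_mc, 50):.3f}, P95 = {np.percentile(leakage_mc, 95):.3f}")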

  4. Evapotranspiration Estimates for a Stochastic Soil-Moisture Model

    Science.gov (United States)

    Chaleeraktrakoon, Chavalit; Somsakun, Somrit

    2009-03-01

    Potential evapotranspiration is information that is necessary for applying a widely used stochastic model of soil moisture (I. Rodriguez Iturbe, A. Porporato, L. Ridolfi, V. Isham and D. R. Cox, Probabilistic modelling of water balance at a point: The role of climate, soil and vegetation, Proc. Roy. Soc. London A455 (1999) 3789-3805). An objective of the present paper is thus to find a proper estimate of the evapotranspiration for the stochastic model. This estimate is obtained by comparing the calculated soil-moisture distribution resulting from various techniques, such as Thornthwaite, Makkink, Jensen-Haise, FAO Modified Penman, and Blaney-Criddle, with an observed one. The comparison results, using five sequences of daily soil moisture for a dry season from November 2003 to April 2004 (Udornthani Province, Thailand), have indicated that all methods can be used if the required weather information is available, because their soil-moisture distributions are alike. In addition, the model is shown to be able to describe the phenomenon approximately at a weekly or biweekly time scale, which is desirable for agricultural engineering applications.

  5. Correlation length estimation in a polycrystalline material model

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2005-01-01

    This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated from a finite-element model, which explicitly accounts for selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the set of crystallographic orientations used. We determined that the different sets of crystallographic orientations affect the general level of the correlation length; however, as the external load is increased, the behaviour of the correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength, the correlation lengths are constant and slightly higher than the average grain size. The correlation length can therefore be considered as an indicator of the first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the values of the correlation lengths calculated from the strain fields. With a further load increase the correlation lengths decrease slightly but stay above the average grain size. (author)

  6. Rainfall estimation with TFR model using Ensemble Kalman filter

    Science.gov (United States)

    Asyiqotur Rohmah, Nabila; Apriliani, Erna

    2018-03-01

    Rainfall fluctuations can affect other environmental conditions and are correlated with economic activity and public health. The increase in the global average temperature is driven by the increase of CO2 in the atmosphere, which causes climate change, while forests act as carbon sinks that help maintain the carbon cycle and mitigate climate change. Climate change caused by deviations in rainfall intensity can affect the economy of a region, and even of countries. This encourages research on rainfall in relation to a forested area. In this study, the mathematical model used describes global temperature, forest cover, and seasonal rainfall and is called the TFR (temperature, forest cover, and rainfall) model. The model is first discretized and then estimated with the Ensemble Kalman Filter (EnKF). The results show that the more ensemble members are used in the estimation, the better the result. The accuracy of the simulation is also influenced by which variables are measured: when a variable is available as measurement data, its simulated estimate is better.
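
    A generic Ensemble Kalman Filter analysis step for a discretized three-component state (standing in for temperature, forest cover and rainfall) is sketched below. The dynamics, observation operator and numbers are illustrative assumptions, not the actual TFR model equations.

        # Generic Ensemble Kalman Filter analysis step for a discretized state vector
        # (here a 3-component stand-in for temperature, forest cover, rainfall).
        # The forecast ensemble and observation settings are illustrative, not the TFR model.
        import numpy as np

        rng = np.random.default_rng(8)

        n_state, n_ens = 3, 100
        H = np.array([[0.0, 0.0, 1.0]])          # only rainfall is observed
        obs = np.array([120.0])                  # observed value
        obs_var = 4.0

        # Forecast ensemble (e.g., produced by propagating the discretized model).
        ens = np.column_stack([
            rng.normal(15.0, 0.5, n_ens),        # temperature
            rng.normal(0.6, 0.05, n_ens),        # forest cover fraction
            rng.normal(110.0, 10.0, n_ens),      # seasonal rainfall
        ]).T                                      # shape (n_state, n_ens)

        # EnKF analysis: Kalman gain from ensemble covariance, perturbed observations.
        mean = ens.mean(axis=1, keepdims=True)
        A = ens - mean
        P = A @ A.T / (n_ens - 1)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var * np.eye(1))
        obs_perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (1, n_ens))
        ens_analysis = ens + K @ (obs_perturbed - H @ ens)

        print("forecast rainfall mean:", ens[2].mean().round(2))
        print("analysis rainfall mean:", ens_analysis[2].mean().round(2))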

  7. A Hybrid of Optical Remote Sensing and Hydrological Modeling Improves Water Balance Estimation

    Science.gov (United States)

    Gleason, Colin J.; Wada, Yoshihide; Wang, Jida

    2018-01-01

    Declining gauging infrastructure and fractious water politics have decreased available information about river flows globally. Remote sensing and water balance modeling are frequently cited as potential solutions, but these techniques largely rely on these same in-decline gauge data to make accurate discharge estimates. A different approach is therefore needed, and we here combine remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and the PCR-GLOBWB hydrological model to estimate discharge over the Lower Nile. Specifically, we first estimate initial discharges from 87 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the model, all without using gauge data. The resulting tuned modeled hydrograph shows a large improvement in flow magnitude: validation of the tuned monthly hydrograph against a historical gauge (1978-1984) yields an RMSE of 439 m3/s (40.8%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: tuned flows have a 1-2 month wet season lag and a negative base flow bias. Accounting for this 2 month lag yields a hydrograph RMSE of 270 m3/s (25.7%). Thus, our coupling of physical models and remote sensing is a promising first step and proof of concept toward future modeling of ungauged flows, especially as developments in cloud computing for remote sensing make our method easily applicable to any basin. Finally, we purposefully do not offer prescriptive solutions for Nile management, and rather hope that the methods demonstrated herein can prove useful to river stakeholders in managing their own water.

  8. The complex model of risk and progression of AMD estimation

    Directory of Open Access Journals (Sweden)

    V. S. Akopyan

    2012-01-01

    Full Text Available Purpose: to develop a method and a statistical model to estimate the individual risk of AMD and the risk of progression to advanced AMD using clinical and genetic risk factors. Methods: A statistical risk assessment model was developed using stepwise binary logistic regression analysis. To estimate population differences in the prevalence of allelic variants of genes and to develop models adapted to the population of the Moscow region, genotyping and assessment of the influence of other risk factors were performed in two groups: patients with different stages of AMD (n = 74) and a control group (n = 116). Genetic risk factors included in the study: polymorphisms in the complement system genes (C3 and CFH), genes at the 10q26 locus (ARMS2 and HTRA1), and a polymorphism in the mitochondrial gene MT-ND2. Clinical risk factors included in the study: age, gender, high body mass index, and smoking history. Results: A comprehensive analysis of genetic and clinical risk factors for AMD in the study group was performed. A statistical model for the assessment of the individual risk of AMD was compiled; the sensitivity of the model is 66.7%, specificity 78.5%, AUC = 0.76. For the risk factors of late AMD, a statistical model describing the probability of late AMD was compiled; the sensitivity of the model is 66.7%, specificity 78.3%, AUC = 0.73. The developed system allows determining the most likely form of late AMD: dry or wet. Conclusion: the developed test system and the mathematical algorithm for determining the risk of AMD and the risk of progression to advanced AMD have fair diagnostic informativeness and are promising for use in clinical practice.
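
    The general workflow, binary logistic regression on clinical and genetic risk factors reported through sensitivity, specificity and AUC, can be sketched on synthetic data as below; the feature names only mimic the study design and the generated data are not related to the Moscow cohort.

        # Sketch of the general workflow described above: binary logistic regression
        # on clinical and genetic risk factors, reported via sensitivity, specificity
        # and AUC. Data are synthetic; feature names only mimic the study design.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, confusion_matrix

        rng = np.random.default_rng(9)
        n = 190                                             # 74 cases + 116 controls (as above)
        y = np.concatenate([np.ones(74), np.zeros(116)]).astype(int)

        # Synthetic risk factors: age, smoking, body-mass index, risk-allele counts.
        age = rng.normal(70, 8, n) + 4 * y
        smoking = rng.binomial(1, 0.25 + 0.15 * y)
        bmi = rng.normal(27, 4, n) + 1.5 * y
        cfh_alleles = rng.binomial(2, 0.35 + 0.2 * y)
        arms2_alleles = rng.binomial(2, 0.25 + 0.2 * y)
        X = np.column_stack([age, smoking, bmi, cfh_alleles, arms2_alleles])

        model = LogisticRegression(max_iter=1000).fit(X, y)
        prob = model.predict_proba(X)[:, 1]
        pred = (prob >= 0.5).astype(int)

        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        print(f"sensitivity = {tp / (tp + fn):.3f}")
        print(f"specificity = {tn / (tn + fp):.3f}")
        print(f"AUC         = {roc_auc_score(y, prob):.3f}")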

  9. [Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].

    Science.gov (United States)

    Yang, Xi-guang; Fan, Wen-yi; Yu, Ying

    2010-11-01

    The forest canopy chlorophyll content directly reflects the health and stress of the forest. Accurate estimation of the forest canopy chlorophyll content is a significant foundation for researching forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on the PROSPECT and SAIL models, i.e., from the physical mechanism angle. First, leaf spectra and canopy spectra were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll content retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content via the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm, that the leaf and canopy spectra simulated by the PROSPECT and SAIL models fitted the measured spectra well, with 7.06% and 16.49% relative error respectively, that the RMSE of the LAI inversion was 0.5426, and that the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.

  10. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    Science.gov (United States)

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.
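
    At the core of such a model is decoding hidden traffic states from noisy observations. A minimal, self-contained Viterbi decoder is sketched below with two illustrative states (free flow, congested) and made-up transition and emission probabilities, not the calibrated model of the paper:

        import numpy as np

        def viterbi(obs, pi, A, B):
            # obs: observation indices; pi: initial state probs; A: transition; B: emission
            n_states, T = A.shape[0], len(obs)
            delta = np.zeros((T, n_states))
            psi = np.zeros((T, n_states), dtype=int)
            delta[0] = np.log(pi) + np.log(B[:, obs[0]])
            for t in range(1, T):
                trans = delta[t - 1][:, None] + np.log(A)   # score of every prev -> next move
                psi[t] = trans.argmax(axis=0)                # best predecessor per state
                delta[t] = trans.max(axis=0) + np.log(B[:, obs[t]])
            path = [int(delta[-1].argmax())]
            for t in range(T - 1, 0, -1):                    # backtrack the best path
                path.append(int(psi[t, path[-1]]))
            return path[::-1]

        # States: 0 = free flow, 1 = congested; observations: 0 = fast, 1 = medium, 2 = slow
        pi = np.array([0.7, 0.3])
        A = np.array([[0.85, 0.15], [0.25, 0.75]])
        B = np.array([[0.70, 0.25, 0.05], [0.10, 0.30, 0.60]])
        print(viterbi([0, 1, 2, 2, 1], pi, A, B))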

  11. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages are not easy to use. User-interactive parameter estimation software was therefore needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) was developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach provided good agreement between predicted and observed data with relatively little computing time and few iterations.
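
    PARES itself is a MATLAB tool; as a Python illustration of the same integration-based least-squares idea, the sketch below fits a first-order kinetic rate constant to noisy simulated measurements (the model, data, and rate are placeholders):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def model(t, c, k):
            # First-order decay kinetics dC/dt = -k * C
            return [-k * c[0]]

        def residuals(theta, t_obs, c_obs, c0):
            k = theta[0]
            sol = solve_ivp(model, (t_obs[0], t_obs[-1]), [c0], t_eval=t_obs, args=(k,))
            return sol.y[0] - c_obs

        t_obs = np.linspace(0, 5, 11)
        c_true = 2.0 * np.exp(-0.8 * t_obs)
        c_obs = c_true + np.random.default_rng(3).normal(0, 0.03, t_obs.size)

        fit = least_squares(residuals, x0=[0.3], args=(t_obs, c_obs, 2.0))
        print("estimated rate constant k:", fit.x[0])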

  12. Computer model for estimating electric utility environmental noise

    International Nuclear Information System (INIS)

    Teplitzky, A.M.; Hahn, K.J.

    1991-01-01

    This paper reports on an algorithm-based computer code for estimating environmental noise emissions from the operation and construction of electric power plants. The computer code (Model) is used to predict octave band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and it calculates off-site sound levels for each noise source and for an entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave band levels at user-defined receptor locations. Calculated sound levels can be compared with user-designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies.
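
    The kind of calculation such a code performs can be sketched as follows: octave-band sound power levels of a source are propagated to a receptor with spherical spreading, A-weighted, and summed. The band levels and distance below are illustrative, and a real model adds directivity, barriers, and ground effects:

        import numpy as np

        # Standard octave-band centre frequencies (Hz) and A-weighting corrections (dB)
        bands = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000])
        a_weight = np.array([-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1])

        # Illustrative octave-band sound power levels Lw of one source (dB re 1 pW)
        lw = np.array([105, 103, 100, 98, 96, 93, 90, 85])

        r = 120.0                                    # receptor distance in metres
        spreading = 20 * np.log10(r) + 11            # spherical spreading from a point source

        lp_bands = lw - spreading                    # band sound pressure levels at receptor
        la = 10 * np.log10(np.sum(10 ** ((lp_bands + a_weight) / 10)))
        print(f"A-weighted level at {r:.0f} m: {la:.1f} dB(A)")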

  13. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations

  14. Multidimensional Rank Reduction Estimator for Parametric MIMO Channel Models

    Directory of Open Access Journals (Sweden)

    Marius Pesavento

    2004-08-01

    Full Text Available A novel algebraic method for the simultaneous estimation of MIMO channel parameters from channel sounder measurements is developed. We consider a parametric multipath propagation model with P discrete paths where each path is characterized by its complex path gain, its directions of arrival and departure, time delay, and Doppler shift. This problem is treated as a special case of the multidimensional harmonic retrieval problem. While the well-known ESPRIT-type algorithms exploit shift-invariance between specific partitions of the signal matrix, the rank reduction estimator (RARE algorithm exploits their internal Vandermonde structure. A multidimensional extension of the RARE algorithm is developed, analyzed, and applied to measurement data recorded with the RUSK vector channel sounder in the 2 GHz band.

  15. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands depart from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation added 1.3 mm of water per application at an average interval of 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of the amount simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important and 66% in saline lands.

  16. Use of econometric models to estimate expenditure shares.

    Science.gov (United States)

    Trogdon, Justin G; Finkelstein, Eric A; Hoerger, Thomas J

    2008-08-01

    To investigate the use of regression models to calculate disease-specific shares of medical expenditures. Medical Expenditure Panel Survey (MEPS), 2000-2003. Theoretical investigation and secondary data analysis. Condition files used to define the presence of 10 medical conditions. Incremental effects of conditions on expenditures, expressed as a fraction of total expenditures, cannot generally be interpreted as shares. When the presence of one condition increases treatment costs for another condition, summing condition-specific shares leads to double-counting of expenditures. Condition-specific shares generated from multiplicative models should not be summed. We provide an algorithm that allows estimates based on these models to be interpreted as shares and summed across conditions.

  17. GNSS Positioning Performance Analysis Using PSO-RBF Estimation Model

    Directory of Open Access Journals (Sweden)

    Jgouta Meriem

    2017-06-01

    Full Text Available Positioning solutions need to be both more precise and more widely available. The most frequent method used nowadays relies on a GPS receiver, sometimes supported by other sensors. Generally, GPS and GNSS suffer from propagation perturbations that produce biases on pseudo-range measurements. To make optimal use of the satellites received, we propose a positioning algorithm that models pseudo-range errors and incorporates an appropriate filtering process. The extended Kalman filter and the Rao-Blackwellized filter are among the most widely used algorithms for predicting errors and filtering high-frequency noise. This paper describes a new method of estimating pseudo-range errors based on a PSO-RBF model that achieves an optimal training criterion. The model is well suited to predicting GPS corrections for accurate positioning, and it reduces positioning errors at high velocities by more than 50% compared with the RLS or EKF methods.

  18. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  19. A quasi-independence model to estimate failure rates

    International Nuclear Information System (INIS)

    Colombo, A.G.

    1988-01-01

    The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be a constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating time of the components in the various cells which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)

  20. Integrated traffic conflict model for estimating crash modification factors.

    Science.gov (United States)

    Shahdah, Usama; Saccomanno, Frank; Persaud, Bhagwant

    2014-10-01

    Crash modification factors (CMFs) for road safety treatments are usually obtained through observational models based on reported crashes. Observational Bayesian before-and-after methods have been applied to obtain more precise estimates of CMFs by accounting for the regression-to-the-mean bias inherent in naive methods. However, sufficient crash data reported over an extended period of time are needed to provide reliable estimates of treatment effects, a requirement that can be a challenge for certain types of treatment. In addition, these studies require that sites analyzed actually receive the treatment to which the CMF pertains. Another key issue with observational approaches is that they are not causal in nature, and as such, cannot provide a sound "behavioral" rationale for the treatment effect. Surrogate safety measures based on high risk vehicle interactions and traffic conflicts have been proposed to address this issue by providing a more "causal perspective" on lack of safety for different road and traffic conditions. The traffic conflict approach has been criticized, however, for lacking a formal link to observed and verified crashes, a difficulty that this paper attempts to resolve by presenting and investigating an alternative approach for estimating CMFs using simulated conflicts that are linked formally to observed crashes. The integrated CMF estimates are compared to estimates from an empirical Bayes (EB) crash-based before-and-after analysis for the same sample of treatment sites. The treatment considered involves changing left turn signal priority at Toronto signalized intersections from permissive to protected-permissive. The results are promising in that the proposed integrated method yields CMFs that closely match those obtained from the crash-based EB before-and-after analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Model Orlando regionally efficient travel management coordination center (MORE TMCC), phase II : final report.

    Science.gov (United States)

    2012-09-01

    The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of : the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...

  2. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  3. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in a cellular control: allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

  4. Micro, nanosystems and systems on chips modeling, control, and estimation

    CERN Document Server

    Voda, Alina

    2013-01-01

    Micro and nanosystems represent a major scientific and technological challenge, with actual and potential applications in almost all fields of the human activity. The aim of the present book is to present how concepts from dynamical control systems (modeling, estimation, observation, identification, feedback control) can be adapted and applied to the development of original very small-scale systems and of their human interfaces. The application fields presented here come from micro and nanorobotics, biochips, near-field microscopy (AFM and STM) and nanosystems networks. Alina Voda has drawn co

  5. Estimation for a Weibull accelerated life testing model

    International Nuclear Information System (INIS)

    Glaser, R.E.

    1984-01-01

    It is sometimes reasonable to assume that the lifetime distribution of an item belongs to a certain parametric family, and that actual parameter values depend upon the testing environment of the item. In the two-parameter Weibull family setting, suppose both the shape and scale parameters are expressible as functions of the testing environment. For various models of functional dependency on environment, maximum likelihood methods are used to estimate characteristics of interest at specified environmental levels. The methodology presented handles exact, censored, and grouped data. A detailed accelerated life testing analysis of stress-rupture data for Kevlar/epoxy composites is given. 10 references, 1 figure, 2 tables
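
    A compact sketch of this type of fit is given below: a two-parameter Weibull whose scale depends log-linearly on a stress covariate, estimated by maximizing a likelihood that mixes exact and right-censored lifetimes. The data are synthetic and the stress model is only one of the functional forms such an analysis might consider:

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(theta, t, s, censored):
            # theta = (log shape k, intercept b0, stress slope b1); scale lam = exp(b0 + b1*s)
            k = np.exp(theta[0])
            lam = np.exp(theta[1] + theta[2] * s)
            z = t / lam
            logpdf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # exact failures
            logsurv = -(z ** k)                                       # right-censored units
            return -np.sum(np.where(censored, logsurv, logpdf))

        rng = np.random.default_rng(4)
        s = rng.uniform(0.0, 1.0, 200)                 # normalized stress (environment) level
        lam_true = np.exp(2.0 - 1.5 * s)
        t = lam_true * rng.weibull(1.8, 200)           # lifetimes
        censored = t > 10.0                            # right-censor long-running units
        t = np.minimum(t, 10.0)

        fit = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0], args=(t, s, censored),
                       method="Nelder-Mead")
        print("shape:", np.exp(fit.x[0]), "scale coefficients:", fit.x[1:])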

  6. LBM estimation of thermal conductivity in meso-scale modelling

    International Nuclear Information System (INIS)

    Grucelski, A

    2016-01-01

    Recently, there has been growing engineering interest in more rigorous prediction of effective transport coefficients for multicomponent, geometrically complex materials. We present the main assumptions and constituents of a meso-scale model for the simulation of coal or biomass devolatilisation with the lattice Boltzmann method. As results, the estimated values of the thermal conductivity coefficient of the coal (solids), pyrolytic gases and air matrix are presented for a non-steady state, with account taken of chemical reactions in the fluid flow and heat transfer. (paper)

  7. A Descriptive Evaluation of Automated Software Cost-Estimation Models,

    Science.gov (United States)

    1986-10-01

    Version 1.03D), PCOC (Version 7.01), PRICE S, SLIM (Version 1.1), SoftCost (Version 5.1), SPQR/20 (Version 1.1), and WICOMO (Version 1.3). These...produce detailed GANTT and PERT charts. SPQR/20 is based on a cost model developed at ITT. In addition to cost, schedule, and staffing estimates, it...cases and test runs required, and the effectiveness of pre-test and test activities. SPQR/20 also predicts enhancement and maintenance activities. C

  8. Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation

    Science.gov (United States)

    Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.

    2018-02-01

    The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.

  9. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  10. Estimating climate change impact on irrigation demand using integrated modelling

    International Nuclear Information System (INIS)

    Zupanc, Vesna; Pintar, Marina

    2004-01-01

    Water is a basic element in agriculture and, along with the soil characteristics, it remains essential for the growth and development of plants. Trends of air temperature and precipitation for Slovenia indicate an increase in air temperature and a reduction of precipitation during the vegetation period, which will have a substantial impact on the rural economy in Slovenia. The impact of climate change on the soil water balance will be substantial. Distinct drought periods in past years had a great impact on agricultural plants in light soils. Climate change will most probably also result in drought in soils which otherwise provide an optimal water supply for plants. The water balance over the rooting depth is significant for agriculture. Mathematical models reduce the number of measurements needed in a given area, since measurements carried out only at characteristic points serve to verify and calibrate the model. The combination of on-site measurements and mathematical modelling proved to be an efficient method for understanding processes in nature. Climate scenarios made for the estimation of the impact of climate change are based on general circulation models. A study based on a hundred-year set of monthly data showed that in Slovenia temperature would increase by at least 2.3 °C, by at most 5.6 °C, and by 4.5 °C on average. A valid methodology for estimating the impact of climate change runs the model with a basic data set for a thirty-year period (1961-1990) and with a changed set of climate input parameters, and then compares the model outputs. The climate change impact on irrigation demand in western Slovenia for peaches and nectarines grown on Cambisols and Fluvisols was estimated using the computer model SWAP. SWAP is a precise and powerful tool for the estimation of elements of the soil water balance over the cross section of the monitored and studied profile from the soil surface

  11. Transport in biosphere of radionuclides released from finally disposed nuclear waste - background information for transport and dose model

    International Nuclear Information System (INIS)

    Hulmi, R.; Savolainen, I.

    1981-07-01

    An outline is given of the biosphere transport and dose models employed to estimate doses due to releases from finally disposed nuclear waste. The models often divide into two parts: the first describes the transport of radionuclides in those parts of the biosphere where the time scale is large (e.g. soil, sea and sea sediment); the second describes the transport of nuclides in systems where the time scale is small (e.g. food chains, plants and animals). The description of biosphere conditions involves considerable uncertainty owing to the complexity of the biosphere and its ecosystems. Therefore scenario-type studies are recommended: values of the parameters describing the conditions are assumed, and the consequences are estimated using these values. The effect of uncertainty in various factors on the uncertainty of the final results should be investigated using alternative scenarios and parametric sensitivity studies. In addition to the ordinary results, intermediate results should be presented. A proposal for the structure of a transport and dose program based on a dynamic linear compartment model is presented, and mathematical solution alternatives are also studied.
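
    The dynamic linear compartment structure proposed above can be sketched as a small system of first-order ODEs; the compartments, transfer rates, and half-life below are illustrative placeholders, not values from the report:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative 3-compartment chain: soil -> plants -> animals, with a common
        # radioactive decay constant and first-order transfer/loss rates (1/yr)
        k12, k23, k_loss = 0.05, 0.10, 0.02
        decay = np.log(2) / 30.0                 # e.g. a nuclide with a 30-year half-life

        def dydt(t, y):
            soil, plant, animal = y
            return [
                -(k12 + decay) * soil,
                k12 * soil - (k23 + decay) * plant,
                k23 * plant - (k_loss + decay) * animal,
            ]

        sol = solve_ivp(dydt, (0, 200), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 200, 201))
        print("plant compartment peaks at year", sol.t[np.argmax(sol.y[1])])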

  12. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  13. Estimating adolescent sleep need using dose-response modeling.

    Science.gov (United States)

    Short, Michelle A; Weber, Nathan; Reynolds, Chelsea; Coussens, Scott; Carskadon, Mary A

    2018-04-01

    This study will (1) estimate the nightly sleep need of human adolescents, (2) determine the time course and severity of sleep-related deficits when sleep is reduced below this optimal quantity, and (3) determine whether sleep restriction perturbs the circadian system as well as the sleep homeostat. Thirty-four adolescents aged 15 to 17 years spent 10 days and nine nights in the sleep laboratory. Between two baseline nights and two recovery nights with 10 hours' time in bed (TIB) per night, participants experienced either severe sleep restriction (5-hour TIB), moderate sleep restriction (7.5-hour TIB), or no sleep restriction (10-hour TIB) for five nights. A 10-minute psychomotor vigilance task (PVT; lapse = response after 500 ms) and the Karolinska Sleepiness Scale were administered every 3 hours during wake. Salivary dim-light melatonin onset was calculated at baseline and after four nights of each sleep dose to estimate circadian phase. Dose-dependent deficits to sleep duration, circadian phase timing, lapses of attention, and subjective sleepiness occurred. Less TIB resulted in less sleep, more lapses of attention, greater subjective sleepiness, and larger circadian phase delays. Sleep need estimated from 10-hour TIB sleep opportunities was approximately 9 hours, while modeling PVT lapse data suggested that 9.35 hours of sleep is needed to maintain optimal sustained attention performance. Sleep restriction perturbs homeostatic and circadian systems, leading to dose-dependent deficits to sustained attention and sleepiness. Adolescents require more sleep for optimal functioning than typically obtained.

  14. Oracle estimation of parametric models under boundary constraints.

    Science.gov (United States)

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  15. Diffuse solar radiation estimation models for Turkey's big cities

    International Nuclear Information System (INIS)

    Ulgen, Koray; Hepbasli, Arif

    2009-01-01

    A reasonably accurate knowledge of the availability of the solar resource at any place is required by solar engineers, architects, agriculturists, and hydrologists in many applications of solar energy such as solar furnaces, concentrating collectors, and interior illumination of buildings. For this purpose, various empirical models (or correlations) have been developed in the past to estimate the solar radiation around the world. This study deals with diffuse solar radiation estimation models along with the statistical test methods used to evaluate their performance. Models used to predict monthly average daily values of diffuse solar radiation are classified in four groups as follows: (i) from the diffuse fraction or cloudiness index, as a function of the clearness index; (ii) from the diffuse fraction or cloudiness index, as a function of the relative sunshine duration or sunshine fraction; (iii) from the diffuse coefficient, as a function of the clearness index; and (iv) from the diffuse coefficient, as a function of the relative sunshine duration or sunshine fraction. Empirical correlations are also developed to establish a relationship between the monthly average daily diffuse fraction or cloudiness index (Kd) and the monthly average daily diffuse coefficient (Kdd) with the monthly average daily clearness index (KT) and the monthly average daily sunshine fraction (S/So) for the three biggest cities by population in Turkey (Istanbul, Ankara and Izmir). Although the global solar radiation on a horizontal surface and the sunshine duration have been measured by the Turkish State Meteorological Service (STMS) across the whole country since 1964, the diffuse solar radiation has not been measured. The eight new models for estimating the monthly average daily diffuse solar radiation on a horizontal surface in the three big cities are validated, and thus the most accurate model is selected for guiding future projects. The new models are then compared with the 32 models available in the
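
    Correlations of type (i) above are typically established by regressing a polynomial between the monthly average clearness index KT and the diffuse fraction Kd. The sketch below fits a cubic to synthetic station records (the generating coefficients are arbitrary placeholders standing in for measured data):

        import numpy as np

        rng = np.random.default_rng(7)
        # Synthetic monthly means: clearness index KT and diffuse fraction Kd
        kt = rng.uniform(0.3, 0.7, 120)
        kd = 1.39 - 4.03 * kt + 5.53 * kt**2 - 3.11 * kt**3 + rng.normal(0, 0.02, kt.size)

        # Third-order correlation Kd = a0 + a1*KT + a2*KT^2 + a3*KT^3
        coeffs = np.polyfit(kt, kd, deg=3)
        kd_hat = np.polyval(coeffs, kt)
        rmse = np.sqrt(np.mean((kd_hat - kd) ** 2))
        print("fitted coefficients (highest order first):", coeffs)
        print("RMSE of the correlation:", rmse)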

  16. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.

  17. SEMI-EMPIRICAL WHITE DWARF INITIAL-FINAL MASS RELATIONSHIPS: A THOROUGH ANALYSIS OF SYSTEMATIC UNCERTAINTIES DUE TO STELLAR EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Salaris, Maurizio; Serenelli, Aldo; Weiss, Achim; Miller Bertolami, Marcelo

    2009-01-01

    Using the most recent results about white dwarfs (WDs) in ten open clusters, we revisit semiempirical estimates of the initial-final mass relation (IFMR) in star clusters, with emphasis on the use of stellar evolution models. We discuss the influence of these models on each step of the derivation. One intention of our work is to use consistent sets of calculations both for the isochrones and the WD cooling tracks. The second one is to derive the range of systematic errors arising from stellar evolution theory. This is achieved by using different sources for the stellar models and by varying physical assumptions and input data. We find that systematic errors, including the determination of the cluster age, are dominating the initial mass values, while observational uncertainties influence the final mass primarily. After having determined the systematic errors, the initial-final mass relation allows us finally to draw conclusions about the physics of the stellar models, in particular about convective overshooting.

  18. Scaling analysis and model estimation of solar corona index

    Science.gov (United States)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising were performed using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) was used to identify the autoregressive (AR) process, and the partial autocorrelation function (PACF) was used to determine the order of the AR model. Finally, a best-fit model is proposed using the Yule-Walker method, supported by goodness-of-fit results and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
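
    The Yule-Walker fitting step can be sketched as follows on a synthetic series standing in for the smoothed coronal index; statsmodels' yule_walker routine returns the AR coefficients and the innovation standard deviation (the series, order, and coefficients here are illustrative):

        import numpy as np
        from statsmodels.regression.linear_model import yule_walker

        rng = np.random.default_rng(5)
        # Synthetic AR(2) stand-in for the de-noised monthly coronal index
        n, true_phi = 840, (0.6, -0.3)
        x = np.zeros(n)
        for t in range(2, n):
            x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.normal()

        order = 2                                   # order suggested by the PACF cut-off
        phi, sigma = yule_walker(x, order=order, method="mle")
        print("AR coefficients:", phi, "innovation std:", sigma)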

  19. Research on parafoil stability using a rapid estimate model

    Directory of Open Access Journals (Sweden)

    Hua YANG

    2017-10-01

    Full Text Available Taking into consideration the rotation between the canopy and the payload of a parafoil system, a four-degree-of-freedom (4-DOF) longitudinal static model was used to solve the parafoil state variables in straight steady flight. The aerodynamic solution of the parafoil system was a combination of the vortex lattice method (VLM) and an engineering estimation method. Based on the small-disturbance assumption, a 6-DOF linear model that accounts for the canopy's additional mass was established, with the benchmark state calculated by the 4-DOF static model. Modal analysis of the dynamic model was used to calculate the stability parameters. This method, based on a small-disturbance linear model and modal analysis, is highly efficient for the study of parafoil stability and is well suited for rapid stability analysis in the preliminary stage of parafoil design. Using this method, this paper shows that longitudinal and lateral stability both decrease as the steady climb angle increases. This explains the wavy track of the parafoil observed during climbing.

  20. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.

  1. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
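
    A minimal integration of a Lotka-Volterra-type competition system of the kind the abstract builds on is sketched below; the growth, capacity, and impact parameters are arbitrary illustrative numbers, not the census-derived values computed by the authors' principles:

        import numpy as np
        from scipy.integrate import solve_ivp

        def competition(t, y, r1, r2, K1, K2, a12, a21):
            # Lotka-Volterra competition between speaker populations x1 and x2;
            # a12 is the impact of language 2 on language 1, and vice versa
            x1, x2 = y
            dx1 = r1 * x1 * (1 - (x1 + a12 * x2) / K1)
            dx2 = r2 * x2 * (1 - (x2 + a21 * x1) / K2)
            return [dx1, dx2]

        params = (0.03, 0.03, 1.0, 1.0, 1.2, 0.8)   # illustrative growth, capacity, impacts
        sol = solve_ivp(competition, (0, 400), [0.5, 0.5], args=params,
                        t_eval=np.linspace(0, 400, 401))
        # The language exerting the stronger impact on its competitor dominates
        print("final population shares:", sol.y[:, -1])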

  2. Estimation of landfill emission lifespan using process oriented modeling

    International Nuclear Information System (INIS)

    Ustohalova, Veronika; Ricken, Tim; Widmann, Renatus

    2006-01-01

    Depending on the particular pollutants emitted, landfills may require service activities lasting from hundreds to thousands of years. Flexible tools allowing long-term predictions of emissions are of key importance to determine the nature and expected duration of maintenance and post-closure activities. A highly capable option is model-based prediction verified by experiments, which is fast, flexible and allows various possible operation scenarios to be compared in order to find the most appropriate one. The intention of the presented work was to develop an experimentally verified multi-dimensional predictive model capable of quantifying and estimating the processes taking place in landfill sites, in which a coupled process description allows precise time and space resolution. This constitutive 2-dimensional model is based on the macromechanical theory of porous media (TPM) for a saturated thermo-elastic porous body. The model was used to simulate simultaneously occurring processes, organic phase transition, gas emissions, heat transport, and settlement behavior, on a long time scale for municipal solid waste deposited in a landfill. The relationships between the properties (composition, pore structure) of a landfill and the conversion and multi-phase transport phenomena inside it were experimentally determined. In this paper, we present both the theoretical background of the model and the results of the simulations at a single point as well as over a vertical landfill cross section

  3. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1982-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined

  4. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs

  5. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combine ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Together they produce a very efficient method that has the right theoretical property concerning robustness, namely the Bounded Relative Error property. Some examples illustrate the results.

  6. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  7. Modeling reactive transport with particle tracking and kernel estimators

    Science.gov (United States)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
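
    The kernel density idea can be illustrated in one dimension: particle positions are turned into a smooth concentration estimate with scipy's gaussian_kde, which needs far fewer particles than a histogram to approach the well-mixed solution (the particle cloud and solute mass below are synthetic):

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(6)
        # 1-D cloud of particle positions representing a dissolved plume (metres)
        positions = rng.normal(loc=50.0, scale=5.0, size=500)
        mass_per_particle = 1e-3                       # kg of solute per particle

        kde = gaussian_kde(positions)                  # bandwidth chosen by Scott's rule
        x = np.linspace(20, 80, 200)
        concentration = kde(x) * positions.size * mass_per_particle   # kg per metre

        # Compare with a coarse histogram estimate of the same cloud
        counts, edges = np.histogram(positions, bins=20)
        hist_conc = counts * mass_per_particle / np.diff(edges)
        print("peak KDE estimate:", concentration.max(), "peak histogram:", hist_conc.max())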

  8. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

  9. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site; instead the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from the non-normal distribution of the noise.

  10. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan; Genton, Marc G.

    2010-01-01

    which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n

  11. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model at a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the complexity metric can be approximated as the degree of difficulty of predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the building effort is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary for machine learning algorithms to reproduce the actual 3D model at a given precision without error.

  12. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predefined requirements. Results of the uncertainty analyses are shown in an example for time-related components. The most appropriate uncertainty analysis proved to be Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution, in this paper the lognormal distribution. (author)

  13. Final Technical Report: "Representing Endogenous Technological Change in Climate Policy Models: General Equilibrium Approaches"

    Energy Technology Data Exchange (ETDEWEB)

    Ian Sue Wing

    2006-04-18

    The research supported by this award pursued three lines of inquiry: (1) The construction of dynamic general equilibrium models to simulate the accumulation and substitution of knowledge, which has resulted in the preparation and submission of several papers: (a) A submitted pedagogic paper which clarifies the structure and operation of computable general equilibrium (CGE) models (C.2), and a review article in press which develops a taxonomy for understanding the representation of technical change in economic and engineering models for climate policy analysis (B.3). (b) A paper which models knowledge directly as a homogeneous factor, and demonstrates that inter-sectoral reallocation of knowledge is the key margin of adjustment which enables induced technical change to lower the costs of climate policy (C.1). (c) An empirical paper which estimates the contribution of embodied knowledge to aggregate energy intensity in the U.S. (C.3), followed by a companion article which embeds these results within a CGE model to understand the degree to which autonomous energy efficiency improvement (AEEI) is attributable to technical change as opposed to sub-sectoral shifts in industrial composition (C.4) (d) Finally, ongoing theoretical work to characterize the precursors and implications of the response of innovation to emission limits (E.2). (2) Data development and simulation modeling to understand how the characteristics of discrete energy supply technologies determine their succession in response to emission limits when they are embedded within a general equilibrium framework. This work has produced two peer-reviewed articles which are currently in press (B.1 and B.2). (3) Empirical investigation of trade as an avenue for the transmission of technological change to developing countries, and its implications for leakage, which has resulted in an econometric study which is being revised for submission to a journal (E.1). As work commenced on this topic, the U.S. withdrawal

  14. The relative pose estimation of aircraft based on contour model

    Science.gov (United States)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

    This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. The target contour is then extracted in each image and its Pseudo-Zernike Moments (PZM) are computed, so that a model library is constructed offline. Next, the projection contour that most resembles the target silhouette in the current image is retrieved from the model library by reference to the PZM; similarity transformation parameters are then generated as the shape context is applied to match the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm on the premise that they lie in the neighborhood of the actual ones. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the ultimate relative pose parameters.

  15. Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops

    Science.gov (United States)

    Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said

    2017-11-01

    The installation of solar panels on Australian rooftops has been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of solar radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, whose variability is described by a probability distribution, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the Quasi Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating satisfactory agreement between the actual data variation and the model.
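
    A minimal sketch of the chain from clearness index to tilted-panel irradiance is shown below. The Erbs-type diffuse-fraction correlation and the isotropic-sky transposition are generic stand-ins for the paper's best-fit Australian correlation, and all numerical inputs are assumed for illustration only.

```python
import numpy as np

def diffuse_fraction(kt):
    """Erbs-type correlation (stand-in for the paper's best-fit Australian correlation)."""
    kt = np.asarray(kt, dtype=float)
    return np.where(
        kt <= 0.22,
        1.0 - 0.09 * kt,
        np.where(
            kt <= 0.80,
            0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
            0.165,
        ),
    )

def tilted_irradiance(ghi, kt, tilt_deg, zenith_deg, albedo=0.2):
    """Isotropic-sky transposition of global horizontal irradiance onto a tilted panel."""
    tilt = np.radians(tilt_deg)
    zenith = np.radians(zenith_deg)
    dhi = diffuse_fraction(kt) * ghi                  # diffuse horizontal component
    dni = (ghi - dhi) / max(np.cos(zenith), 1e-6)     # direct normal component
    # Simplification: the panel is assumed to face the sun azimuthally,
    # so the incidence angle reduces to (zenith - tilt).
    beam = dni * max(np.cos(zenith - tilt), 0.0)
    sky_diffuse = dhi * (1 + np.cos(tilt)) / 2
    ground = ghi * albedo * (1 - np.cos(tilt)) / 2
    return beam + sky_diffuse + ground

# Example: clearness index 0.7, 30-degree tilt, 40-degree solar zenith, GHI of 800 W/m2.
print(f"plane-of-array irradiance ~ {tilted_irradiance(800.0, 0.7, 30.0, 40.0):.0f} W/m2")
```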

  16. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

    Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability

  17. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Directory of Open Access Journals (Sweden)

    Troy W Grovenburg

    Full Text Available New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i

  18. Biochemical transport modeling, estimation, and detection in realistic environments

    Science.gov (United States)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detecting whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.

  19. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available Abstract: A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, the mixture model is developed for cure rate analysis based on missing data. Several methods can be used to analyze missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after every iteration; consequently, under the EM algorithm the sequence of likelihood values converges provided the likelihood is bounded.
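
    A minimal sketch of the four-step EM loop described above, applied to a mixture cure model with exponential survival for uncured patients and right-censored observations, is given below. The data are simulated and the parametric form is an illustrative assumption, not the model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated right-censored survival data from a mixture cure model:
# a fraction pi of patients is cured; uncured survival is exponential with rate lam.
true_pi, true_lam, n = 0.3, 0.1, 500
cured = rng.random(n) < true_pi
event_time = rng.exponential(1.0 / true_lam, n)
censor_time = rng.uniform(0, 40, n)
time = np.where(cured, censor_time, np.minimum(event_time, censor_time))
event = (~cured) & (event_time <= censor_time)          # 1 = observed death, 0 = censored

# EM: the latent "uncured" status of censored patients plays the role of the missing data.
pi_hat, lam_hat = 0.5, 0.05                             # step 1: initial parameter set
for _ in range(200):
    # step 2: E-step - expected probability of being uncured given the current model
    surv_uncured = np.exp(-lam_hat * time)
    w = np.where(event, 1.0,
                 (1 - pi_hat) * surv_uncured / (pi_hat + (1 - pi_hat) * surv_uncured))
    # step 3: M-step - new parameters from the expected complete data
    pi_new = 1.0 - w.mean()
    lam_new = event.sum() / np.sum(w * time)
    # step 4: stop when converged, otherwise repeat with the new model
    if abs(pi_new - pi_hat) < 1e-8 and abs(lam_new - lam_hat) < 1e-8:
        break
    pi_hat, lam_hat = pi_new, lam_new

print(f"estimated cure fraction = {pi_hat:.3f}, uncured hazard = {lam_hat:.4f}")
```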

  20. IMPROVEMENT OF MATHEMATICAL MODELS FOR ESTIMATION OF TRAIN DYNAMICS

    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak

    2017-12-01

    Full Text Available Purpose. Using scientific publications the paper analyzes the mathematical models developed in Ukraine, CIS countries and abroad for theoretical studies of train dynamics and also shows the urgency of their further improvement. Methodology. The information base of the research consisted of official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: 1) design of new types of draft gears and air distributors; 2) development of methods for controlling the movement of conventional and connected trains; 3) creation of appropriate process flow diagrams; 4) development of energy-saving methods of train driving; 5) revision of the Construction Codes and Regulations (SNiP ΙΙ-39.76); 6) selection of the parameters of the autonomous automatic control system, created in DNURT, for an auxiliary locomotive that is part of a connected train; 7) creation of computer simulators for the training of locomotive drivers; 8) assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors presented the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, CIS countries and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  1. The modular modality frame model: continuous body state estimation and plausibility-weighted information fusion.

    Science.gov (United States)

    Ehrenfeld, Stephan; Butz, Martin V

    2013-02-01

    Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference and with respect to particular body parts. We evaluate MMF performance on a simulated, nine degree of freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.

  2. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  3. Estimation of intermediate-grade uranium resources II. Proposed method for estimating intermediate-grade uranium resources in roll-front deposits. Final report

    International Nuclear Information System (INIS)

    Lambie, F.W.; Yee, S.N.

    1981-09-01

    The purpose of this and a previous project was to examine the feasibility of estimating intermediate grade uranium (0.01 to 0.05% U3O8) on the basis of existing, sparsely drilled holes. All data are from the Powder River Basin in Wyoming. DOE makes preliminary estimates of endowment by calculating an Average Area of Influence (AAI) based on densely drilled areas, multiplying that by the thickness of the mineralization and then dividing by a tonnage factor. The resulting tonnage of ore is then multiplied by the average grade of the interval to obtain the estimate of U3O8 tonnage. Total endowment is the sum of these values over all mineralized intervals in all wells in the area. In regions where wells are densely drilled and approximately regularly spaced this technique approaches the classical polygonal estimation technique used to estimate ore reserves and should be fairly reliable. The method is conservative because: (1) in sparsely drilled regions a large fraction of the area is not considered to contribute to endowment; (2) there is a bias created by the different distributions of point grades and mining block grades. A conservative approach may be justified for purposes of ore reserve estimation, where large investments may hinge on local forecasts. But for estimates of endowment over areas as large as 1° by 2° quadrangles, or the nation as a whole, errors in local predictions are not critical as long as they tend to cancel, and a less conservative estimation approach may be justified. One candidate, developed for this study and described here, is called the contoured thickness technique. A comparison of estimates based on the contoured thickness approach with DOE calculations for five areas of Wyoming roll-fronts in the Powder River Basin is presented. The sensitivity of the technique to well density is examined and the question of predicting intermediate grade endowment from data on higher grades is discussed

  4. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations, while, in others, indirect approaches led to more accurate estimations. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied on radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.
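
    A minimal sketch contrasting the direct and indirect approaches with simple linear transfer functions is shown below. The height-biomass relation, noise levels and data are illustrative assumptions, not the FORMIND-calibrated relationships, and the texture-enhanced variant is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic canopy heights (m) at two points in time and corresponding "true" AGB (t/ha).
h1 = rng.uniform(10, 40, 1000)
h2 = h1 + rng.normal(0.5, 2.0, 1000)
agb = lambda h: 8.0 * h + rng.normal(0, 15, h.shape)   # assumed allometric-type relation
agb1, agb2 = agb(h1), agb(h2)
d_agb_true = agb2 - agb1

# Indirect approach: estimate AGB stocks at both dates from height, then take the difference.
coef_stock = np.polyfit(np.concatenate([h1, h2]), np.concatenate([agb1, agb2]), 1)
d_agb_indirect = np.polyval(coef_stock, h2) - np.polyval(coef_stock, h1)

# Direct approach: regress the AGB change directly on the canopy height change.
coef_change = np.polyfit(h2 - h1, d_agb_true, 1)
d_agb_direct = np.polyval(coef_change, h2 - h1)

rmse = lambda est: np.sqrt(np.mean((est - d_agb_true) ** 2))
print(f"RMSE indirect = {rmse(d_agb_indirect):.1f} t/ha, RMSE direct = {rmse(d_agb_direct):.1f} t/ha")
```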

  5. Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations

    Directory of Open Access Journals (Sweden)

    Lenny Suardi

    2014-08-01

    Full Text Available This paper applies the maximum likelihood (ML) approaches to implementing the structural model of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and Longstaff & Schwartz (LS) models, are used in determining these prices, yields, yield spreads and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as the option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond prices data and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are implemented on a sample of 24 bond prices in Indonesian corporations during the period of 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond prices data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The result shows that both models, on average, underestimate the bond prices and overestimate the yields and yield spread.
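
    A minimal sketch of the firm-value extraction step is shown below: equity is treated as a call option on firm value (the extended Merton structure simplified to the basic Merton model) and the pricing formula is inverted for a trial asset volatility. All numerical inputs are assumed for illustration; the full ML iteration over the equity series and the LS model are not reproduced.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def merton_equity(V, F, sigma, r, T):
    """Equity value as a European call on firm value V with debt face value F (basic Merton model)."""
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2)

def implied_firm_value(E, F, sigma, r, T):
    """Invert the pricing formula: find V such that model equity matches observed equity E."""
    return brentq(lambda V: merton_equity(V, F, sigma, r, T) - E, E, E + 10 * F)

# Illustrative inputs: observed equity, face value of liabilities, trial asset volatility.
E_obs, F, sigma_V, r, T = 120.0, 300.0, 0.25, 0.06, 1.0
V = implied_firm_value(E_obs, F, sigma_V, r, T)
print(f"implied firm value = {V:.1f}")
# In the ML scheme of Duan (1994), this inversion is repeated for every observation date
# while the drift and sigma_V are chosen to maximize the transformed likelihood of the equity series.
```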

  6. Vibration Stabilization of a Mechanical Model of a X-Band Linear Collider Final Focus Magnet

    CERN Document Server

    Frisch, J; Decker, V; Hendrickson, L; Markiewicz, T W; Partridge, R; Seryi, Andrei

    2004-01-01

    The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer based system.

  7. Vibration Stabilization of a Mechanical Model of a X-Band Linear Collider Final Focus Magnet

    International Nuclear Information System (INIS)

    Frisch, Josef; Chang, Allison; Decker, Valentin; Doyle, Eric; Eriksson, Leif; Hendrickson, Linda; Himel, Thomas; Markiewicz, Thomas; Partridge, Richard; Seryi, Andrei; SLAC

    2006-01-01

    The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer based system.

  8. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver one. Examples are presented by employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation

  9. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. In the paper we test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the parameters of the genetic algorithm that lead to maximum effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals are governed by a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine the optimal generation size, because it is positively correlated with the effectiveness of the genetic algorithm over the whole range under study, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, followed by the generation size. The sensitivity to the elitism rate is not as strong.
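
    A minimal sketch of a genetic algorithm estimating the parameters of a nonlinear demand-type function by minimizing squared error is given below. The 15% mutation rate and 20% elitism echo the values reported above, but the data, function form and the real-coded (rather than bit-string) chromosome encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative nonlinear demand function q = a * p**b and noisy observations.
true_a, true_b = 50.0, -1.2
price = np.linspace(1, 10, 40)
quantity = true_a * price**true_b + rng.normal(0, 0.5, price.size)

def fitness(pop):
    """Negative sum of squared errors for each candidate (a, b)."""
    pred = pop[:, :1] * price**pop[:, 1:2]
    return -np.sum((pred - quantity) ** 2, axis=1)

pop_size, n_gen, mutation_rate, elite_frac = 60, 200, 0.15, 0.20
pop = np.column_stack([rng.uniform(1, 100, pop_size), rng.uniform(-3, 0, pop_size)])

for _ in range(n_gen):
    fit = fitness(pop)
    order = np.argsort(fit)[::-1]
    elite = pop[order[: int(elite_frac * pop_size)]]       # elitism: carry the best candidates over
    # Select parents from the better half and recombine by arithmetic crossover.
    parents = pop[order[: pop_size // 2]]
    idx = rng.integers(0, len(parents), (pop_size - len(elite), 2))
    alpha = rng.random((pop_size - len(elite), 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    # Mutation: perturb a fraction of the genes.
    mask = rng.random(children.shape) < mutation_rate
    children = np.where(mask, children + rng.normal(0, 0.5, children.shape), children)
    pop = np.vstack([elite, children])

best = pop[np.argmax(fitness(pop))]
print(f"estimated a = {best[0]:.2f}, b = {best[1]:.2f} (true: {true_a}, {true_b})")
```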

  10. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  11. The model for estimation production cost of embroidery handicraft

    Science.gov (United States)

    Nofierni; Sriwana, IK; Septriani, Y.

    2017-12-01

    The embroidery industry is a type of micro industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery clothing, such as scarves and garments, is produced to show the cultural value of a certain region. The owner of an enterprise must calculate the cost of production before deciding how many products to accept from a customer. A calculation approach to production cost analysis is therefore needed to consider the feasibility of each incoming order. This study proposes the design of an expert system (ES) to improve production management in the embroidery industry. The model is designed as a fuzzy inference system for estimating production cost. The research was conducted based on a survey and knowledge acquisition from stakeholders of the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The paper uses fuzzy inputs for the quality, the complexity of the design and the required working hours, and the result of the model is useful for managing production cost in embroidery production.
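
    A minimal Mamdani-style sketch with triangular membership functions for design complexity and working hours is shown below. The membership ranges, rule base and cost levels are illustrative assumptions, not the calibrated system from the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def estimate_cost(complexity, hours):
    """Mamdani-style inference: fuzzify inputs, apply the rules, defuzzify by weighted average."""
    # Fuzzify inputs on an assumed 0-10 complexity scale and 0-100 hours scale.
    cx_low, cx_high = tri(complexity, 0, 0, 6), tri(complexity, 4, 10, 10)
    hr_low, hr_high = tri(hours, 0, 0, 60), tri(hours, 40, 100, 100)
    # Rule base (AND = min); output cost levels in thousands of rupiah (assumed values).
    rules = [
        (min(cx_low, hr_low), 50.0),     # low complexity, few hours   -> cheap
        (min(cx_low, hr_high), 120.0),   # low complexity, many hours  -> medium
        (min(cx_high, hr_low), 150.0),   # high complexity, few hours  -> medium-high
        (min(cx_high, hr_high), 250.0),  # high complexity, many hours -> expensive
    ]
    weights = np.array([w for w, _ in rules])
    costs = np.array([c for _, c in rules])
    return float(np.sum(weights * costs) / (np.sum(weights) + 1e-9))

print(f"estimated cost ~ {estimate_cost(complexity=7, hours=55):.0f} thousand IDR")
```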

  12. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.

  13. A financial planning model for estimating hospital debt capacity.

    Science.gov (United States)

    Hopkins, D S; Heath, D; Levin, P J

    1982-01-01

    A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658

  14. Model Specifications for Estimating Labor Market Returns to Associate Degrees: How Robust Are Fixed Effects Estimates? A CAPSEE Working Paper

    Science.gov (United States)

    Belfield, Clive; Bailey, Thomas

    2017-01-01

    Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…

  15. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    Science.gov (United States)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into understanding the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.

  16. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  17. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
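
    A minimal sketch of the kind of parameter estimation involved is given below: a first-order RC house model fitted to simulated indoor-temperature data by least squares. The paper uses higher-order networks and TRNSYS data, so the model structure, noise levels and parameter values here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a first-order RC house model (stand-in for TRNSYS output).
R_true, C_true = 5.0e-3, 2.0e7            # K/W and J/K (illustrative values)
dt, n = 600.0, 1000                       # 10-minute time steps
t_out = 5.0 + 5.0 * np.sin(2 * np.pi * np.arange(n) * dt / 86400)   # outdoor temperature (degC)
q_heat = np.where(rng.random(n) < 0.5, 4000.0, 0.0)                 # floor-heating power (W)
t_in = np.empty(n)
t_in[0] = 20.0
for k in range(n - 1):
    t_in[k + 1] = t_in[k] + dt / C_true * ((t_out[k] - t_in[k]) / R_true + q_heat[k])
t_in += rng.normal(0, 0.02, n)            # measurement noise

# Least-squares estimation: dT = (1/(R*C)) * dt*(T_out - T_in) + (1/C) * dt*Q
y = np.diff(t_in)
X = np.column_stack([dt * (t_out[:-1] - t_in[:-1]), dt * q_heat[:-1]])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
C_est = 1.0 / theta[1]
R_est = 1.0 / (theta[0] * C_est)
print(f"R ~ {R_est:.2e} K/W (true {R_true}), C ~ {C_est:.2e} J/K (true {C_true})")
```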

  18. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating the parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost the same. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.

  19. Risk assessment for the Yucca Mountain high-level nuclear waste repository site: Estimation of volcanic disruption. Final report

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang.

    1992-01-01

    In this article, we model the volcanism near the proposed nuclear waste repository at Yucca Mountain, Nevada, U.S.A. by estimating the instantaneous recurrence rate using a nonhomogeneous Poisson process with Weibull intensity and by using a homogeneous Poisson process to predict future eruptions. We then quantify the probability that any single eruption is disruptive in terms of a (prior) probability distribution, since not every eruption would result in disruption of the repository. Bayesian analysis is performed to evaluate the volcanic risk. Based on the Quaternary data, a 90% confidence interval for the instantaneous recurrence rate near the Yucca Mountain site is (1.85 x 10^-6/yr, 1.26 x 10^-5/yr). Also, using these confidence bounds, the corresponding 90% confidence interval for the risk (probability of at least one disruptive eruption) for an isolation time of 10^4 years is (1.0 x 10^-3, 6.7 x 10^-3), if it is assumed that the intensity remains constant during the projected time frame
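
    A minimal sketch of the probability calculation implied above: with a homogeneous Poisson recurrence rate λ, a per-eruption disruption probability p and an isolation time T, the probability of at least one disruptive eruption is 1 - exp(-λpT). The disruption probability used below is an assumed illustrative value, not the prior distribution from the paper.

```python
import numpy as np

def disruption_risk(rate_per_year, p_disruptive, years):
    """P(at least one disruptive eruption) for a homogeneous Poisson process."""
    return 1.0 - np.exp(-rate_per_year * p_disruptive * years)

# 90% confidence bounds on the instantaneous recurrence rate quoted in the abstract.
for lam in (1.85e-6, 1.26e-5):
    # The per-eruption disruption probability of 0.05 is an assumed illustrative value.
    print(f"rate {lam:.2e}/yr -> risk over 1e4 yr ~ {disruption_risk(lam, 0.05, 1.0e4):.2e}")
```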

  20. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  1. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  2. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site, but the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. So we applied a method similar to the Allan Variance that is commonly used in the estimation of clock uncertainties and is able to account for time dependent probability density functions (colored noise) to the TrigNet time series. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal in agreement with previous studies.

  3. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model, that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up for new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time varying liver extraction based on both C-peptide and insulin measurements.

  4. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in the last decades, especially in the northern part of the species range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted to the respective results that had been derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details in landscape configuration which are neglected by the MM but are easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in mortalities of caterpillars lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance in host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach at least for the management of this threatened butterfly.

  5. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.

    Science.gov (United States)

    Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak

    2015-01-01

    This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
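
    A minimal sketch of the final simulation step is shown below: a Gaussian copula fitted to pseudo-observations, back-transformation through the empirical marginals, and VaR/CVaR for an equally weighted portfolio. The GARCH filtering and GPD tail fitting described in the paper are omitted, and the return data are simulated, so this illustrates only the copula-and-risk-measure part under assumed inputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated daily returns for five natural gas series (stand-in for GARCH-filtered data).
n_obs, n_assets = 1500, 5
corr_true = 0.6 * np.ones((n_assets, n_assets)) + 0.4 * np.eye(n_assets)
returns = rng.multivariate_normal(np.zeros(n_assets), 0.02**2 * corr_true, n_obs)

# Gaussian copula: map each margin to normal scores and estimate the correlation matrix.
u = (stats.rankdata(returns, axis=0) - 0.5) / n_obs          # pseudo-observations in (0, 1)
z = stats.norm.ppf(u)
rho = np.corrcoef(z, rowvar=False)

# Simulate portfolios: draw from the copula, then map back through the empirical marginals.
n_sim = 100_000
z_sim = rng.multivariate_normal(np.zeros(n_assets), rho, n_sim)
u_sim = stats.norm.cdf(z_sim)
sim_returns = np.column_stack(
    [np.quantile(returns[:, j], u_sim[:, j]) for j in range(n_assets)]
)
portfolio = sim_returns.mean(axis=1)                          # equally weighted portfolio

alpha = 0.95
var = -np.quantile(portfolio, 1 - alpha)                      # value at risk
cvar = -portfolio[portfolio <= -var].mean()                   # conditional value at risk
print(f"95% VaR  = {var:.4f}")
print(f"95% CVaR = {cvar:.4f}")
```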

  6. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model

    Directory of Open Access Journals (Sweden)

    Jiechen Tang

    2015-01-01

    Full Text Available This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.

  7. Estimation Model of Spacecraft Parameters and Cost Based on a Statistical Analysis of COMPASS Designs

    Science.gov (United States)

    Gerberich, Matthew W.; Oleson, Steven R.

    2013-01-01

    The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database, to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters through basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model in regards to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.

  8. PARAMETER ESTIMATION IN NON-HOMOGENEOUS BOOLEAN MODELS: AN APPLICATION TO PLANT DEFENSE RESPONSE

    Directory of Open Access Journals (Sweden)

    Maria Angeles Gallego

    2014-11-01

    Full Text Available Many medical and biological problems require extracting information from microscopical images. Boolean models have been extensively used to analyze binary images of random clumps in many scientific fields. In this paper, a particular type of Boolean model with an underlying non-stationary point process is considered. The intensity of the underlying point process is formulated as a fixed function of the distance to a region of interest. A method to estimate the parameters of this Boolean model is introduced, and its performance is checked in two different settings. Firstly, a comparative study with other existing methods is carried out using simulated data. Secondly, the method is applied to analyze the longleaf data set, which is a very popular data set in the context of point processes, included in the R package spatstat. The obtained results show that the new method provides estimates as accurate as those obtained with more complex methods developed for the general case. Finally, to illustrate the application of this model and this method, a particular type of phytopathological image is analyzed. These images show callose depositions in leaves of Arabidopsis plants. The analysis of callose depositions is very popular in the phytopathological literature for quantifying the activity of plant immunity.

  9. A Modified Model to Estimate Building Rental Multipiers Accounting for Advalorem Operating Expenses

    Directory of Open Access Journals (Sweden)

    Smolyak S.A.

    2016-09-01

    Full Text Available To develop the ideas on building element valuation contained in the first article on the subject published in REMV, we propose an elaboration of the approach accounting for ad valorem expenses incidental to property management, such as land taxes, income/capital gains tax, and insurance premium costs; all such costs, being of an ad valorem nature in the first instance, cause circularity in the logic of the model, which, however, is not intractable under the proposed approach. The resulting formulas for carrying out practical estimation of building rental multipliers and, in consequence, of building values, turn out to be somewhat modified, and we demonstrate the sensitivity of the developed approach to the impact of these ad valorem factors. On the other hand, it is demonstrated that building depreciation charges (which should seemingly be included among the considered ad valorem factors) cancel out and do not have any impact on the resulting estimates. However, treating the depreciation of buildings in quantifiable economic terms as a reduction in derivable operating benefits over time (instead of mere physical indications, such as age), we also demonstrate that the approach has implications for estimating the economic service lives of buildings and can be practical when used in conjunction with the market-related approach to valuation, from which the requisite model inputs can be extracted as shown in the final part of the paper.

  10. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  11. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major data base for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem; namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards

  12. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  13. THE QUANTITATIVE MODEL OF THE FINALIZATIONS IN MEN’S COMPETITIVE HANDBALL AND THEIR EFFICIENCY

    Directory of Open Access Journals (Sweden)

    Eftene Alexandru

    2009-10-01

    Full Text Available In the epistemic steps, we approach a competitive performance behavior model built after a quantitative analysis of certain data collected from the official International Handball Federation protocols on the performance of the first four teams of the World Men's Handball Championship - Croatia 2009, during semifinals and finals. This model is a part of the integrative (global) model of the handball game, which will be gradually investigated during the following research. I have started the construction of this model from the premise that the finalization represents the essence of the game. The components of our model, in a prioritized order: shot at the goal from 9m - 15p; shot at the goal from 6m - 12p; shot at the goal from 7m - 12p; fast break shot at the goal - 11.5p; wing shot at the goal - 8.5p; penetration shot at the goal - 7p;

  14. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996-Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer; FINAL

    International Nuclear Information System (INIS)

    Vermeul, Vince R; Cole, Charles R; Bergeron, Marcel P; Thorne, Paul D; Wurstner, Signe K

    2001-01-01

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness of fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty

  15. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the
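
    A deliberately simplified sketch of the EM idea described above, using a univariate two-component Gaussian mixture as a stand-in for the bivariate gamma-t mixture actually fitted (the component forms, initialization and variable names are assumptions for illustration only, not the authors' method):

        import numpy as np

        def normal_pdf(x, mu, sd):
            return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

        def two_component_em(x, n_iter=100):
            """EM for a background/foreground mixture of two univariate normals."""
            mu = np.percentile(x, [25.0, 75.0])          # rough initial means
            sd = np.array([x.std(), x.std()])
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibilities (posterior probabilities of membership)
                dens = np.vstack([pi[k] * normal_pdf(x, mu[k], sd[k]) for k in range(2)])
                resp = dens / dens.sum(axis=0)
                # M-step: update mixing weights, means and standard deviations
                nk = resp.sum(axis=1)
                pi = nk / x.size
                mu = (resp @ x) / nk
                sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
            return pi, mu, sd, resp[1]   # resp[1]: posterior prob. of foreground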

  16. Stability Analysis for Li-Ion Battery Model Parameters and State of Charge Estimation by Measurement Uncertainty Consideration

    Directory of Open Access Journals (Sweden)

    Shifei Yuan

    2015-07-01

    Full Text Available Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated with respect to three factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, a numerical model stability analysis and a parametric sensitivity analysis of the battery model parameters are conducted for sampling frequencies of 1–50 Hz, and a perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameters and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile; the bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms are adopted to estimate the model parameters and SoC jointly. Finally, the simulation results are compared and some insightful findings are drawn. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy have a non-negligible effect on the estimated model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation and provides guidelines for selecting a reasonable sampling period and current/voltage sensor precisions in engineering applications.
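
    As a hedged illustration of the joint parameter estimation step described above (a generic sketch, not the paper's CRLS/AEKF implementation), the following shows one plain recursive least squares update with a forgetting factor; the regressor layout and the forgetting-factor value are assumptions made for the example.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One recursive least squares step with forgetting factor lam.
            theta : current parameter estimate, shape (n,)
            P     : covariance of the estimate, shape (n, n)
            phi   : regressor vector for this sample, shape (n,)
            y     : measured output (e.g. terminal voltage)"""
            k = P @ phi / (lam + phi @ P @ phi)        # gain vector
            e = y - phi @ theta                        # prediction error
            theta = theta + k * e                      # parameter update
            P = (P - np.outer(k, phi @ P)) / lam       # covariance update
            return theta, P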

  17. Estimation of Rainfall Associated with Typhoons over the Ocean Using TRMM/TMI and Numerical Models

    Directory of Open Access Journals (Sweden)

    Nan-Ching Yeh

    2015-11-01

    Full Text Available This study quantitatively estimated the precipitation associated with typhoons in the northwestern Pacific Ocean by using a physical algorithm that combines the Weather Research and Forecasting model, the Radiative Transfer for TIROS Operational Vertical Sounder model, and data from the Tropical Rainfall Measuring Mission (TRMM): the TRMM Microwave Imager (TMI) and the TRMM Precipitation Radar (PR). First, a prior probability distribution function (PDF) was constructed using over three million rain rate retrievals from the TRMM/PR data for the period 2002–2010 over the northwestern Pacific Ocean. Subsequently, brightness temperatures for 15 typhoons that occurred over the northwestern Pacific Ocean were simulated using a microwave radiative transfer model, and a conditional PDF was obtained for these typhoons. The physical algorithm then uses a posterior PDF, obtained by combining the prior and conditional PDFs. Finally, the rain rate associated with a typhoon was estimated by inputting the TMI observations (attenuation indices at 10, 19, and 37 GHz) into the posterior PDF (lookup table). Results based on rain rate retrievals indicated that the rainband locations with the heaviest rainfall showed qualitatively similar horizontal distributions. The correlation coefficient and root-mean-square error of the rain rate estimation were 0.63 and 4.45 mm·h−1, respectively. Furthermore, the correlation coefficient and root-mean-square error for convective rainfall were 0.78 and 7.25 mm·h−1, respectively, and those for stratiform rainfall were 0.58 and 9.60 mm·h−1, respectively. The main contribution of this study is an approach that quickly and accurately estimates typhoon precipitation while removing the need for complex calculations.
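
    A minimal sketch of the Bayesian combination step described above, assuming discretized rain-rate bins and brightness-temperature classes (the bin layout, array names and expected-value retrieval are illustrative assumptions, not the paper's lookup table):

        import numpy as np

        # prior[r]        : P(rain-rate bin r), e.g. built from historical PR retrievals
        # likelihood[t, r]: P(brightness-temperature class t | rain-rate bin r),
        #                   e.g. built from radiative-transfer simulations
        def posterior_table(prior, likelihood):
            post = likelihood * prior[np.newaxis, :]        # joint, up to a constant
            return post / post.sum(axis=1, keepdims=True)   # post[t, r] = P(r | t)

        def estimate_rain_rate(post, rain_bins, t_obs):
            # expected rain rate given the observed brightness-temperature class
            return post[t_obs] @ rain_bins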

  18. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  19. "A space-time ensemble Kalman filter for state and parameter estimation of groundwater transport models"

    Science.gov (United States)

    Briseño, Jessica; Herrera, Graciela S.

    2010-05-01

    Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was later applied by Herrera et al. (2001) and by Herrera and Pinder (2005). To obtain estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to achieve a good match between them manually. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity together with the hydraulic head and contaminant concentration, and to apply it in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each of the conductivity realizations. From these realizations the means of ln K, h and C are obtained; for h and C the mean is calculated in space and time, together with the space-time cross-covariance matrix of h, ln K and C. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data for the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; analogously, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one
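
    A minimal sketch of a generic (non-space-time) ensemble Kalman filter analysis step of the kind the method builds on; the matrix names and the perturbed-observation variant are assumptions for illustration, not the authors' exact formulation:

        import numpy as np

        def enkf_analysis(X, H, y, R, rng=np.random.default_rng(0)):
            """X : state ensemble, shape (n_state, n_ens)
               H : observation operator, shape (n_obs, n_state)
               y : observation vector, shape (n_obs,)
               R : observation error covariance, shape (n_obs, n_obs)"""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                      # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=n_ens).T         # perturbed observations
            return X + K @ (Y - H @ X)                     # updated ensemble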

  20. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation and does not infiltrate, owing to soil saturation, and therefore circulates over the surface. A good estimation of runoff is useful for the design of drainage systems, flood control structures and soil utilisation. For runoff estimation there exist different methods, such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimates. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model is expected to improve the model simulations.
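
    For reference, a small sketch of the standard SCS-CN runoff relation that the work builds on (SI units, with the usual initial-abstraction ratio of 0.2 assumed; the curve number in the example is illustrative):

        def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
            """Direct runoff depth (mm) from event rainfall p_mm and curve number cn."""
            s = 25400.0 / cn - 254.0           # potential maximum retention (mm)
            ia = ia_ratio * s                  # initial abstraction (mm)
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        # e.g. scs_cn_runoff(60.0, 80) -> roughly 20 mm of direct runoff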

  1. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models

  2. The Estimation Modelling of Damaged Areas by Harmful Animals

    Science.gov (United States)

    Jang, R.; Sung, M.; Hwang, J.; Jeon, S. W.

    2017-12-01

    The Republic of Korea has undergone rapid urban development without sufficient consideration of the environment. This type of growth is accompanied by a reduction in forest area and wildlife habitat, a loss that affects the habitat of large mammals more than that of small ones. In Korea in particular, the damage caused by wild boar (Sus scrofa) is more severe than that caused by other large mammal species such as water deer (Hydropotes inermis), and the number of reported damage cases for wild boar is correspondingly higher. Wild boar produce three to eight young per year and can breed every year, so the population grows quickly relative to its fragmented habitat. Because the species is one of the top predators in Korea, human intervention is unavoidable for population control. In addition, some individuals have been displaced from major habitats into other areas, or invade zones of human activity in search of food, thereby destroying crops; the species has come to be treated as a farm pest, and is also involved in road kills and appearances in urban areas. In this study, we estimated likely farm pest occurrence points from 2,505 reported wildlife damage sites in Gyeongnam province in Korea, using four types of geological information, four kinds of forest information, land cover, and the distribution of farmland. In the estimation model, built with MAXENT, the number of background points was set to 10,000; 70% of the damaged sites were used to construct the model and 30% for verification, and 10-fold cross-validation was carried out, verified by the AUC of the ROC curve. As a result of the analyses, the AUC was 0.847, and the largest percent contributions were the distance to the forest interior (36.1%), land cover (16.5%), and the distance from fields (14.9%). Furthermore, the permutation importance was 24.9% for land cover and 12.3% for the height

  3. Estimating national forest carbon stocks and dynamics: combining models and remotely sensed information

    Science.gov (United States)

    Smallman, Thomas Luke; Exbrayat, Jean-François; Bloom, Anthony; Williams, Mathew

    2017-04-01

    Forests are a critical component of the global carbon cycle, storing significant amounts of carbon split between living biomass and dead organic matter. The carbon budget of forests is the most uncertain component of the global carbon cycle: it is currently impossible to quantify accurately the carbon source/sink strength of forest biomes, due to their heterogeneity and complex dynamics. Generating robust carbon budgets across landscapes has been a major challenge because of data scarcity. Models have been used to estimate carbon budgets, but their outputs have lacked an assessment of uncertainty, making a robust evaluation of their reliability and accuracy difficult. Here a Metropolis-Hastings Markov chain Monte Carlo (MH-MCMC) data assimilation framework is used to combine remotely sensed leaf area index (MODIS), biomass (where available) and deforestation estimates, forest planting information from the UK's national forest inventory, an estimate of soil carbon from the Harmonized World Soil Database (HWSD), and plant trait information with a process model (DALEC), to produce a constrained analysis, with a robust estimate of uncertainty, of the UK forestry carbon budget between 2000 and 2010. Our analysis estimates the mean annual UK forest carbon sink at -3.9 MgC ha-1 yr-1, with a 95% confidence interval between -4.0 and -3.1 MgC ha-1 yr-1; the UK national forest inventory (NFI) estimates the mean UK forest carbon sink to be between -1.4 and -5.5 MgC ha-1 yr-1. The analysis estimate of total forest biomass carbon stock in 2010 is 229 (177/232) TgC, while the NFI gives an estimated total forest biomass carbon stock of 216 TgC. Leaf carbon area (LCA) is a key plant trait that we are able to estimate in our analysis. Comparison of the median LCA estimates retrieved from the analysis with a UK land cover map shows that higher and lower values of LCA are estimated in areas dominated by needle-leaf and broad-leaf forests respectively, consistent with
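
    A minimal, generic Metropolis-Hastings sketch of the kind of sampler named above; the Gaussian random-walk proposal, step size and log-posterior interface are assumptions for illustration, not the authors' actual data assimilation implementation:

        import numpy as np

        def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.1,
                                rng=np.random.default_rng(0)):
            """Random-walk Metropolis sampler over a parameter vector theta."""
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            chain = np.empty((n_iter, theta.size))
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.size)   # propose
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:                # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain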

  4. Search for the standard model Higgs boson in tau final states

    NARCIS (Netherlands)

    Abazov, V.M.; et al., [Unknown; Ancu, L.S.; de Jong, S.J.; Filthaut, F.; Galea, C.F.; Hegeman, J.G.; Houben, P.; Meijer, M.M.; Svoisky, P.; van den Berg, P.J.; van Leeuwen, W.M.

    2009-01-01

    We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb(-1) of data collected with the D0 detector at the Fermilab Tevatron p(p)over-bar collider. We select two final states: tau(+/-) plus missing transverse energy and b jets, and tau(+)tau(-) plus

  5. An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios (Final Report, 2008)

    Science.gov (United States)

    EPA announced the availability of the final report, An Exploratory Study: Assessment of Modeled Dioxin Exposure in Ceramic Art Studios. This report investigates the potential dioxin exposure to artists/hobbyists who use ball clay to make pottery and related products. Derm...

  6. Improving Water Balance Estimation in the Nile by Combining Remote Sensing and Hydrological Modelling: a Template for Ungauged Basins

    Science.gov (United States)

    Gleason, C. J.; Wada, Y.; Wang, J.

    2017-12-01

    Declining gauging infrastructure and fractious water politics have decreased the available information about river flows globally, especially in international river basins. Remote sensing and water balance modelling are frequently cited as potential solutions, but these techniques largely rely on the same declining gauge data to constrain or parameterize discharge estimates, creating a circular approach that is inapplicable to ungauged basins. To address this, we here combine a discontinued gauge, remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and Landsat data, and the PCR-GLOBWB hydrological model to estimate discharge for an ungauged time period on the Lower Nile (1978-present). Specifically, we first estimate initial discharges from 86 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the hydrologic model. Our tuning methodology is purposefully simple and can be easily applied to any model without the need for calibration/parameterization. The resulting tuned modelled hydrograph shows a large improvement in flow magnitude over previous modelled hydrographs, and validation of tuned monthly model output flows against the historical gauge yields an RMSE of 343 m3/s (33.7%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: modelled flows have a one- to two-month wet-season lag and a negative bias. More sophisticated model calibration and training (e.g. data assimilation) is needed to improve upon these results; however, the results achieved by coupling physical models and remote sensing are a promising first step and proof of concept toward future modelling of ungauged flows. This is especially true as massive cloud computing via Google Earth Engine makes our method easily applicable to any basin without current gauges. Finally, we purposefully do not offer prescriptive solutions for Nile management, and

  7. Model to estimate fractal dimension for ion-bombarded materials

    Energy Technology Data Exchange (ETDEWEB)

    Hu, A., E-mail: hu77@purdue.edu; Hassanein, A.

    2014-03-15

    The comprehensive fractal Monte Carlo model ITMC-F (Hu and Hassanein, 2012 [1]) was developed based on a Monte Carlo ion bombardment simulation code, the Ion Transport in Materials and Compounds (ITMC) code (Hassanein, 1985 [2]). ITMC-F studies the impact of surface roughness on the angular dependence of sputtering yield. Instead of assuming in the simulation that material surfaces are flat or composed of exact self-similar fractals, we developed a new method to describe the surface shapes. Random fractal surfaces generated by the midpoint displacement algorithm and a support vector machine algorithm are combined with ITMC. With this new fractal version, ITMC-F, we successfully simulated the angular dependence of sputtering yield for various ion-target combinations, with the input surface roughness exponent taken directly from experimental data (Hu and Hassanein, 2012 [1]). The ITMC-F code showed good agreement with the experimental data. In addition, we compare other experimental sputtering yields with results from ITMC-F to estimate the surface roughness exponent for ion-bombarded materials.
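
    A minimal sketch of the midpoint displacement idea mentioned above for generating a random fractal profile (shown in 1-D for brevity; the roughness-exponent parameterization and amplitude scaling are assumptions for illustration, not the ITMC-F implementation):

        import numpy as np

        def midpoint_displacement(n_levels, hurst=0.7, rng=np.random.default_rng(0)):
            """Random fractal profile on 2**n_levels + 1 points.
            hurst : roughness exponent H in (0, 1); larger H gives a smoother profile."""
            n = 2 ** n_levels
            z = np.zeros(n + 1)
            scale = 1.0
            step = n
            while step > 1:
                half = step // 2
                mids = np.arange(half, n, step)
                # displace each midpoint around the mean of its two endpoints
                z[mids] = 0.5 * (z[mids - half] + z[mids + half]) \
                          + scale * rng.standard_normal(mids.size)
                scale *= 0.5 ** hurst    # reduce the displacement amplitude each level
                step = half
            return z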

  8. Model dependencies of risk aversion and working interest estimates

    International Nuclear Information System (INIS)

    Lerche, I.

    1996-01-01

    Working interest, W, and risk adjusted value, RAV, are evaluated using both Cozzolino's formula, for an exponential dependence of risk aversion, and a hyperbolic tangent dependence. In addition, a general method is given for constructing an RAV formula for any functional choice of the risk-aversion dependence. Two examples illustrate how the model dependencies influence choices of working interest and risk adjusted value, depending on whether the expected value of the project is positive or negative. In general the Cozzolino formula provides a more conservative position on risk than does the hyperbolic tangent formula, reflecting the difference in corporate attitudes to risk aversion. The commonly used Cozzolino formula is shown to have simple exact arithmetic expressions for the maximum working interest and maximum RAV; the hyperbolic tangent formula has approximate analytic expressions. Both formulae also yield approximate analytical expressions for the working interest yielding a risk-neutral RAV of zero. These arithmetic results are useful for making quick estimates of working interest ranges and risk adjusted values. (Author)
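
    A hedged sketch of an exponential-utility (Cozzolino-type) risk adjusted value of the kind referred to above; the sign conventions, symbol names and parameter layout are assumptions for illustration, not necessarily the exact form used in the paper:

        import numpy as np

        def rav_exponential(p_success, value, cost, working_interest, r):
            """Exponential-utility risk adjusted value of a fractional project share.
            p_success        : probability the project succeeds
            value            : net value to the project if it succeeds
            cost             : cost borne if it fails
            working_interest : fractional share W taken in the project
            r                : risk-aversion coefficient (1 / monetary unit)"""
            w = working_interest
            return -(1.0 / r) * np.log(
                p_success * np.exp(-r * w * value)
                + (1.0 - p_success) * np.exp(r * w * cost))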

  9. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis

    of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov bootstrap versions of the test statistics are proposed ... to improve on the finite-sample approximations. The finite sample properties of the estimators are examined in a simulation study ...

  10. Estimating dose painting effects in radiotherapy: a mathematical model.

    Directory of Open Access Journals (Sweden)

    Juan Carlos López Alfonso

    Full Text Available Tumor heterogeneity is widely considered to be a determinant factor in tumor progression and in particular in its recurrence after therapy. Unfortunately, current medical techniques are unable to deduce clinically relevant information about tumor heterogeneity by means of non-invasive methods. As a consequence, when radiotherapy is used as the treatment of choice, radiation dosimetries are prescribed under the assumption that the targeted malignancy is of a homogeneous nature. In this work we discuss the effects of different radiation dose distributions on heterogeneous tumors by means of an individual cell-based model. To that end, a case is considered where two tumor cell phenotypes are present, which are assumed to differ strongly in their respective cell cycle duration and radiosensitivity properties. We show herein that, as a result of such differences, the spatial distribution of the corresponding phenotypes, and hence the resulting tumor heterogeneity, can be predicted as growth proceeds. In particular, we show that if we start from a situation where a majority of ordinary cancer cells (CCs) and a minority of cancer stem cells (CSCs) are randomly distributed, and we assume that the CSC cycle is significantly longer than that of CCs, then CSCs become concentrated in an inner region as the tumor grows. As a consequence, if CSCs are assumed to be more resistant to radiation than CCs, heterogeneous dosimetries can be selected to enhance tumor control by boosting radiation in the region occupied by the more radioresistant tumor cell phenotype. It is also shown that, when compared with homogeneous dose distributions such as those currently delivered in clinical practice, such heterogeneous radiation dosimetries always fare better than their homogeneous counterparts. Finally, limitations of our assumptions and their resulting clinical implications are discussed.

  11. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  12. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLabTM-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  13. A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions

    International Nuclear Information System (INIS)

    Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng

    2014-01-01

    Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light-duty vehicle emissions is proposed, in which floating car data (FCD) are used to collect information on road traffic conditions. The analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM) and the Traffic Emission Module (TEM). The ASAAM obtains the average speed and average acceleration of the fleet on each road segment using FCD. The TFEM estimates the traffic flow of each road segment in a given period, based on the speed-flow relationship and the spatial distribution of traffic flow. Finally, the TEM estimates emissions from each road segment, based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this framework, and the temporal-spatial distribution patterns of the pollutant emissions of road segments are summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen, and that road segments emit more during rush hours than in other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information.
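
    A minimal sketch of the kind of segment-level calculation the three modules feed, assuming a speed-binned emission factor and the segment's estimated hourly traffic flow (the factor values and function names are illustrative, not taken from the paper):

        def speed_emission_factor(avg_speed_kmh):
            # illustrative speed-binned factors for a light-duty fleet (g per veh-km)
            if avg_speed_kmh < 20:
                return 3.5
            if avg_speed_kmh < 60:
                return 2.0
            return 2.6

        def segment_emission(flow_veh_per_h, length_km, factor_g_per_veh_km):
            """Hourly emissions (g/h) for one road segment."""
            return flow_veh_per_h * length_km * factor_g_per_veh_km

        # e.g. a 1.2 km segment carrying 1800 veh/h at an average 35 km/h:
        # segment_emission(1800, 1.2, speed_emission_factor(35)) -> 4320.0 g/h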

  14. An integrated model for estimating energy cost of a tidal current turbine farm

    International Nuclear Information System (INIS)

    Li, Ye; Lence, Barbara J.; Calisal, Sander M.

    2011-01-01

    A tidal current turbine is a device for harnessing energy from tidal currents, and it functions in a manner similar to a wind turbine. A tidal current turbine farm consists of a group of tidal current turbines distributed in a site where high-speed current is available. Accurate prediction of the energy cost of a tidal current turbine farm is important to the justification of planning and constructing such a farm. However, the existing approaches used to predict the energy cost of tidal current turbine farms oversimplify the hydrodynamic interactions between turbines in the energy prediction, and oversimplify the operation and maintenance strategies and related fees in the cost estimation. In this paper, we develop a model to predict the energy cost of a tidal current turbine farm that integrates a high-accuracy marine hydrodynamic model for predicting energy output with a comprehensive, cost-effective operation and maintenance model for estimating the cost incurred in producing that energy. This model is expected to simulate more complicated cases and generate more accurate results than existing models. As there is no real tidal current turbine farm yet, we validate the model against offshore wind studies. Finally, case studies of Vancouver are conducted with a scenario-based analysis. We minimize the energy cost by minimizing the total cost and maximizing the total power output under constraints related to local conditions (e.g., geological and labor information) and the turbine specifications. The results suggest that tidal current energy is about ready to penetrate the electricity market in some major cities in North America, provided the learning curve for operation and maintenance is minimal. (author)
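
    A minimal sketch of the energy-cost figure being minimized, written in the usual levelized form of total discounted cost over total discounted energy (the discount rate and the yearly cash-flow layout are assumptions for illustration, not the paper's cost model):

        def levelized_energy_cost(yearly_costs, yearly_energy_kwh, discount_rate=0.07):
            """Cost per kWh: discounted lifetime cost over discounted lifetime energy.
            yearly_costs      : capital plus O&M cost for each year of the farm's life
            yearly_energy_kwh : energy delivered in each of those years"""
            disc_cost = sum(c / (1 + discount_rate) ** t
                            for t, c in enumerate(yearly_costs))
            disc_energy = sum(e / (1 + discount_rate) ** t
                              for t, e in enumerate(yearly_energy_kwh))
            return disc_cost / disc_energy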

  15. Estimating release of polycyclic aromatic hydrocarbons from coal-tar contaminated soil at manufactured gas plant sites. Final report

    International Nuclear Information System (INIS)

    Lee, L.S.

    1998-04-01

    One of EPRI's goals regarding the environmental behavior of organic substances consists of developing information and predictive tools to estimate the release potential of polycyclic aromatic hydrocarbons (PAHs) from contaminated soils at manufactured gas (MGP) plant sites. A proper assessment of the distribution of contaminants under equilibrium conditions and the potential for mass-transfer constraints is essential in evaluating the environmental risks of contaminants in the subsurface at MGP sites and for selecting remediation options. The results of this research provide insights into estimating maximum release concentrations of PAHs from MGP soils that have been contaminated by direct contact with the tar or through years of contact with contaminated groundwater. Attention is also given to evaluating the use of water-miscible cosolvents for estimating aqueous phase concentrations, and assessing the role of mass-transfer constraints in the release of PAHs from MGP site soils

  16. Urban micro-scale flood risk estimation with parsimonious hydraulic modelling and census data

    Directory of Open Access Journals (Sweden)

    C. Arrighi

    2013-05-01

    Full Text Available The adoption of the 2007/60/EC Directive requires European countries to implement flood hazard and flood risk maps by the end of 2013. Flood risk is the product of flood hazard, vulnerability and exposure, all three to be estimated with a comparable level of accuracy. The route to flood risk assessment is consequently much more than hydraulic modelling of inundation, that is, hazard mapping. While hazard maps have already been implemented in many countries, quantitative damage and risk maps are still at a preliminary stage. A parsimonious quasi-2-D hydraulic model, which has the advantage of an easy set-up, is adopted here and is shown to estimate flood depth accurately in urban areas when a high-resolution, up-to-date Digital Surface Model (DSM) is available. The accuracy, assessed by comparison with marble-plate records of a historic flood in the city of Florence, is characterized in the downtown's most flooded area by a bias of a very few centimetres and a determination coefficient of 0.73. The average risk is found to be about 14 € m−2 yr−1, corresponding to about 8.3% of residents' income. The spatial distribution of the estimated risk highlights a complex interaction between the flood pattern and the building characteristics. As a final example application, the estimated risk values are used to compare different retrofitting measures. Proceeding through the risk estimation steps, a new micro-scale potential damage assessment method is proposed, based on the georeferenced census system as the optimal compromise between spatial detail and open availability of socio-economic data. The results of the flood risk assessment at the census section scale resolve most of the spatial variability of risk, and they can easily be aggregated to whatever upper scale is needed, given that they are geographically defined as contiguous polygons. Damage is calculated through stage–damage curves, starting from census data on building type and
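
    A minimal sketch of how an annualized risk figure such as the value above is commonly obtained, by integrating scenario damages over annual exceedance probability (the scenario return periods and the trapezoidal integration are assumptions for illustration, not the paper's procedure):

        def expected_annual_damage(return_periods_years, damages):
            """Trapezoidal integration of damage against annual exceedance probability.
            return_periods_years : e.g. [500, 200, 100, 30], matching 'damages'"""
            probs = [1.0 / t for t in return_periods_years]
            pairs = sorted(zip(probs, damages))                  # ascending probability
            ead = 0.0
            for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
                ead += 0.5 * (d0 + d1) * (p1 - p0)
            return ead

        # e.g. expected_annual_damage([500, 200, 100, 30], [90.0, 60.0, 40.0, 15.0])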

  17. Estimating the Autocorrelated Error Model with Trended Data: Further Results,

    Science.gov (United States)

    1979-11-01

    Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors--a bias... k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro’s (1976) conjecture that "an estimator utilizing relevant extraneous information

  18. The Spatial Fay-Herriot Model in Poverty Estimation

    Directory of Open Access Journals (Sweden)

    Wawrowski Łukasz

    2016-12-01

    Full Text Available Counteracting poverty is one of the objectives of the European Commission, clearly emphasized in the Europe 2020 strategy. Conducting appropriate social policy requires knowledge of the extent of this phenomenon. Such information is provided through surveys on living conditions conducted by, among others, the Central Statistical Office (CSO). Nevertheless, the sample size in these surveys allows for a precise estimation of the poverty rate only at a very general level - the whole country and regions. The small sample size at lower levels of spatial aggregation results in a large variance of the obtained estimates and hence lower reliability. To obtain information for sparsely represented territorial sections, methods of small area estimation are used. By drawing on information from other sources, such as censuses and administrative registers, it is possible to estimate distribution parameters with a smaller variance than in the case of direct estimation.
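
    For reference, the standard (non-spatial) Fay-Herriot model that the spatial variant extends can be written as follows, in the conventional small-area-estimation notation (not necessarily the paper's):

        \hat{\theta}_i = \theta_i + e_i, \qquad e_i \sim N(0, \psi_i), \qquad
        \theta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i, \qquad v_i \sim N(0, \sigma_v^2),

        \tilde{\theta}_i = \gamma_i \hat{\theta}_i + (1-\gamma_i)\,\mathbf{x}_i^{\top}\hat{\boldsymbol{\beta}},
        \qquad \gamma_i = \frac{\sigma_v^2}{\sigma_v^2 + \psi_i},

    where \hat{\theta}_i is the direct survey estimate for area i with known sampling variance \psi_i, and the EBLUP \tilde{\theta}_i shrinks it toward the regression synthetic estimate; the spatial Fay-Herriot model replaces the independent random effects v_i with a simultaneously autoregressive process over neighbouring areas.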

  19. Comparison of modeled estimates of inhalation exposure to aerosols during use of consumer spray products.

    Science.gov (United States)

    Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung

    2018-05-30

    In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the inhaled aerosol doses estimated by several inhalation exposure models with experimental measurements of aerosols released from consumer spray products, and then compares the deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Center for Ecotoxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and the modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydisperse particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from the experimental estimates ranging from 0 to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with differences from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions. In conclusion, the results showed that the inhalation exposure models tested in this study are suitable

  20. Benefits Estimation Model for Automated Vehicle Operations: Phase 2 Final Report

    Science.gov (United States)

    2018-01-01

    Automated vehicles have the potential to bring about transformative safety, mobility, energy, and environmental benefits to the surface transportation system. They are also being introduced into a complex transportation system, where second-order imp...