Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice
Janz, George J
1967-01-01
Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the
Estimation of oil reservoir thermal properties through temperature log data using inversion method
International Nuclear Information System (INIS)
Cheng, Wen-Long; Nian, Yong-Le; Li, Tong-Tong; Wang, Chang-Long
2013-01-01
Oil reservoir thermal properties not only play an important role in steam-injection-well heat transfer but are also basic parameters for evaluating the oil saturation of a reservoir. In this study, to estimate reservoir thermal properties, a novel heat and mass transfer model of a steam injection well was first established. The model fully treats wellbore-reservoir as well as wellbore-formation heat and mass transfer, and its simulated results were quite consistent with the log data. The study then presents an effective inversion method for estimating the reservoir thermal properties from temperature log data. This method is based on the heat transfer model in steam injection wells and predicts the thermal properties by stochastic approximation. Applied to two steam injection wells, the inversion gave relative errors in thermal conductivity of 2.9% and 6.5%, and relative errors in volumetric specific heat capacity of 6.7% and 7.0%, which demonstrates the feasibility of the proposed method for estimating reservoir thermal properties. - Highlights: • An effective inversion method for predicting oil reservoir thermal properties is presented. • A novel steam injection well model fully treats wellbore-reservoir heat and mass transfer. • The wellbore temperature field and steam parameters can be simulated efficiently by the model. • Both reservoir and formation thermal properties can be estimated simultaneously by the proposed method. • The estimated steam temperature was quite consistent with the field data.
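The inversion idea in this record reduces, in miniature, to fitting a forward heat-transfer model to logged temperatures. The sketch below is a deliberately simplified stand-in (a linear 1-D conductive profile with invented constants), not the paper's full wellbore-reservoir model:

```python
import math

def forward_temperature(k, depths, q=50.0, T0=80.0):
    """Toy forward model: steady 1-D conductive profile T(z) = T0 + (q / k) * z.
    q (heat flux) and T0 (surface temperature) are hypothetical constants
    standing in for the paper's wellbore-reservoir heat/mass transfer model."""
    return [T0 + q / k * z for z in depths]

def invert_conductivity(depths, T_log, k_lo=0.5, k_hi=5.0, iters=60):
    """Golden-section search minimizing the misfit between modelled and logged
    temperatures: the deterministic core of an inversion scheme."""
    g = (math.sqrt(5) - 1) / 2

    def misfit(k):
        return sum((Tm - To) ** 2
                   for Tm, To in zip(forward_temperature(k, depths), T_log))

    a, b = k_lo, k_hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if misfit(c) < misfit(d):
            b = d
        else:
            a = c
    return (a + b) / 2

depths = [i * 10.0 for i in range(1, 21)]      # 10..200 m
T_log = forward_temperature(2.1, depths)       # synthetic "log data", true k = 2.1 W/m/K
k_est = invert_conductivity(depths, T_log)
print(round(k_est, 3))
```

On this noiseless synthetic profile the search recovers the conductivity used to generate the data; the paper's stochastic-approximation variant plays the same role when the misfit surface is noisy.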
Directory of Open Access Journals (Sweden)
Taylor Mac Intyer Fonseca Junior
2013-12-01
This work evaluates seven methods for estimating fatigue properties, applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared with the estimates obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior under monotonic and cyclic loading are probably the reason why no method reliably estimates the fatigue behavior of this group of materials from monotonic properties.
Health effects estimation: Methods and results for uranium mill tailings contaminated properties
International Nuclear Information System (INIS)
Denham, D.H.; Cross, F.T.; Soldat, J.K.
1990-01-01
This paper describes methods for estimating potential health effects from exposure to uranium mill tailings and presents a summary of risk projections for 50 contaminated properties (residences, schools, churches, and businesses) in the US. The methods provide realistic estimates of cancer risk to exposed individuals based on property-specific occupancy and contamination patterns. External exposure to gamma radiation, inhalation of radon daughters, and consumption of food products grown in radium-contaminated soil are considered. Most of the projected risk was from indoor exposure to radon daughters; however, for some properties the risk from consumption of locally grown food products is similar to that from radon daughters. In all cases, the projected number of lifetime cancer deaths for a specific property is less than one, but for some properties the increase in risk over that normally expected is greater than 100%.
Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
International Nuclear Information System (INIS)
Arai, R; Fukuda, H; Numazawa, T; Tamura, R; Li, J; Saito, A T; Nakagome, H; Kaji, S
2015-01-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures, and a detailed understanding of the physical properties of the materials is crucial for optimizing the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms in a Gd-based ferromagnet (a typical magnetocaloric material) are replaced by non-magnetic elements. For this purpose, alongside calculations using the S=7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
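The Monte Carlo machinery this record refers to can be illustrated with the textbook spin-1/2 case; the paper's S = 7/2 model differs only in the available spin states. A minimal Metropolis sketch (J = k_B = 1, lattice size and sweep count are assumed toy values):

```python
import math
import random

def ising_magnetization(T, L=16, sweeps=400, seed=1):
    """Metropolis Monte Carlo for a 2-D spin-1/2 Ising lattice with periodic
    boundaries. Returns |magnetization| per spin after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]            # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = s[(i + 1) % L][j] + s[(i - 1) % L][j] \
               + s[i][(j + 1) % L] + s[i][(j - 1) % L]
            dE = 2 * s[i][j] * nb              # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    m = sum(sum(row) for row in s) / (L * L)
    return abs(m)

# Below the 2-D critical temperature (~2.27) the lattice stays ordered; well
# above it the magnetization collapses. Bracketing the drop this way is the
# crudest form of a Curie-temperature estimate.
print(ising_magnetization(1.0), ising_magnetization(5.0))
```

Sweeping T on a finer grid and locating the specific-heat peak gives the Curie-temperature estimate that the record compares against measurement.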
Estimation of Physical Properties of Amino Acids by Group-Contribution Method
DEFF Research Database (Denmark)
Jhamb, Spardha Virendra; Liang, Xiaodong; Gani, Rafiqul
2018-01-01
In this paper, we present group-contribution (GC) based property models for estimation of physical properties of amino acids using their molecular structural information. The physical properties modelled in this work are normal melting point (Tm), aqueous solubility (Ws), and octanol....../water partition coefficient (Kow) of amino acids. The developed GC-models are based on the published GC-method by Marrero and Gani (J. Marrero, R. Gani, Fluid Phase Equilib. 2001, 183-184, 183-208) with inclusion of new structural parameters (groups and molecular weight of compounds). The main objective...... of introducing these new structural parameters in the GC-model is to provide additional structural information for amino acids having large and complex structures and thereby improve predictions of physical properties of amino acids. The group-contribution values were calculated by regression analysis using...
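A group-contribution model of the kind cited here expresses a property as a function of a sum of group contributions; for the normal melting point, one common Marrero-Gani-style form is exp(Tm/Tm0) = Σ N_i·C_i. The constants below are illustrative placeholders, not the published regressed parameters:

```python
import math

# Tm0 and the group values are assumed for illustration only.
TM0 = 147.45  # K; scaling constant (assumed)
GROUPS = {"CH3": 0.6953, "CH2": 0.2515, "COOH": 13.25, "NH2": 7.00}  # assumed

def melting_point(counts):
    """GC estimate: Tm = Tm0 * ln(sum_i N_i * C_i), where counts maps each
    group name to its number of occurrences N_i in the molecule."""
    total = sum(n * GROUPS[g] for g, n in counts.items())
    return TM0 * math.log(total)

# A glycine-like fragment set (illustrative only): one CH2, one COOH, one NH2
print(round(melting_point({"CH2": 1, "COOH": 1, "NH2": 1}), 1))
```

The paper's extension adds further structural parameters (e.g. molecular weight) to this sum for large, complex amino acids; the skeleton of the calculation is unchanged.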
A method for estimation of plasma albumin concentration from the buffering properties of whole blood
DEFF Research Database (Denmark)
Rees, Stephen Edward; Diemer, Tue; Kristensen, Søren Risom
2012-01-01
PURPOSE: Hypoalbuminemia is strongly associated with poor clinical outcome. Albumin is usually measured at the central laboratory rather than at the point of care, but in principle, information exists in the buffering properties of whole blood to estimate plasma albumin concentration from point-of-care measurements of acid-base and oxygenation status. This article presents and evaluates a new method for doing so. MATERIALS AND METHODS: The mathematical method for estimating plasma albumin concentration is described. To evaluate the method at numerous albumin concentrations, blood from 19 healthy subjects......
Miller, Renee; Kolipaka, Arunark; Nash, Martyn P; Young, Alistair A
2018-03-12
Magnetic resonance elastography (MRE) has been used to estimate isotropic myocardial stiffness. However, anisotropic stiffness estimates may give insight into structural changes that occur in the myocardium as a result of pathologies such as diastolic heart failure. The virtual fields method (VFM) has been proposed for estimating material stiffness from image data. This study applied the optimised VFM to identify transversely isotropic material properties both from simulated harmonic displacements in a left ventricular (LV) model with a fibre field measured from histology and from isotropic phantom MRE data. Two material model formulations were implemented, estimating either 3 or 5 material properties. The 3-parameter formulation writes the transversely isotropic constitutive relation in a way that dissociates the bulk modulus from the other parameters. Accurate identification of transversely isotropic material properties in the LV model was shown to depend on the loading condition applied, the amount of Gaussian noise in the signal, and the frequency of excitation. Parameter sensitivity values showed that the shear moduli are less sensitive to noise than the other parameters. This preliminary investigation showed the feasibility and limitations of using the VFM to identify transversely isotropic material properties from MRE images of a phantom as well as from simulated harmonic displacements in an LV geometry. Copyright © 2018 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Jeon, Woo Soo; Song, Ji Ho
2001-01-01
An expert system for estimating fatigue properties from simple tensile data is developed, incorporating nearly all important estimation methods proposed so far (seven methods in total). The system is also designed to be usable when only hardness data are available. The knowledge base is constructed with production rules and frames using an expert system shell, UNIK, and forward chaining is employed as the reasoning method. The expert system has three functions, including one to update the knowledge base. Its performance is tested using 54 ε-N curves consisting of 381 ε-N data points obtained for 22 materials. The expert system shows excellent performance for steels and reasonably good performance for aluminum alloys.
International Nuclear Information System (INIS)
Lachheb, Mohamed; Karkri, Mustapha; Albouchi, Fethi; Mzali, Foued; Nasrallah, Sassi Ben
2014-01-01
Highlights: • Preparation of paraffin/graphite composites by a uniaxial compression technique. • Measurement of thermophysical properties of paraffin/graphite using the periodic method. • Measurement of the experimental densities of paraffin/graphite composites. • Prediction of the effective thermal conductivity using analytical models. - Abstract: In this paper, two types of graphite were combined with paraffin in an attempt to improve the thermal conductivity of paraffin phase change material (PCM): synthetic graphite (Timrex SFG75) and graphite waste obtained from damaged tubular graphite heat exchangers. The paraffin/graphite composites were prepared by the cold uniaxial compression technique, and their thermophysical properties were estimated using a periodic temperature method and an inverse technique. The results showed that the thermal conductivity and thermal diffusivity are greatly influenced by the graphite addition.
Directory of Open Access Journals (Sweden)
Mohd Faris Dziauddin
2017-07-01
This study estimates the effect of locational attributes on residential property values in Kuala Lumpur, Malaysia. Geographically weighted regression (GWR) enables local rather than global parameters to be estimated, with the results presented in map form. The results of this study reveal that residential property values are mainly determined by the property's physical (structural) attributes, but that proximity to locational attributes also contributes marginally. The use of GWR in this study is considered a better approach than other methods for examining the effect of locational attributes on residential property values: GWR can produce meaningful results in which different locational attributes have differential spatial effects on residential property values across a geographical area. The method can determine the factors on which premiums depend, and in turn can assist the government in taxation matters.
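GWR, as used in this record, fits a separate weighted regression at each calibration location, down-weighting distant observations with a kernel. A minimal one-predictor sketch on synthetic data (all variable names, the kernel bandwidth, and the "market" numbers are invented for illustration):

```python
import math

def local_fit(u, locs, X, y, bandwidth=0.15):
    """One GWR calibration point: weighted least squares with a Gaussian kernel
    on distance from location u. One predictor plus intercept, so the weighted
    normal equations solve in closed form."""
    w = [math.exp(-((l - u) / bandwidth) ** 2 / 2) for l in locs]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, X))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, X))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, X, y))
    beta = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)  # local slope
    alpha = (swy - beta * swx) / sw                           # local intercept
    return alpha, beta

# Synthetic market: the premium on floor area doubles from one end of the city
# (location 0) to the other (location 1). Global OLS would average this away;
# GWR recovers the spatial gradient.
locs = [i / 99 for i in range(100)]
X = [1.0 + (i % 7) / 7 for i in range(100)]        # e.g. scaled floor area
y = [(1 + l) * x for l, x in zip(locs, X)]         # true local slope = 1 + location
print(round(local_fit(0.0, locs, X, y)[1], 2),
      round(local_fit(1.0, locs, X, y)[1], 2))
```

Mapping the fitted local slopes over the study area gives exactly the kind of premium surface the study presents for Kuala Lumpur.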
DEFF Research Database (Denmark)
Kærn, Martin Ryhl; Modi, Anish; Jensen, Jonas Kjær
2015-01-01
Transport properties of fluids are indispensable for heat exchanger design. The methods for estimating the transport properties of ammonia–water mixtures are not well established in the literature. The few existing methods are developed from no or limited, sometimes inconsistent, experimental...... of ammonia–water mixtures. Firstly, the different methods are introduced and compared at various temperatures and pressures. Secondly, their individual influence on the required heat exchanger size (surface area) is investigated. For this purpose, two case studies related to the use of the Kalina cycle...... the interpolative methods in contrast to the corresponding-states methods. Nevertheless, all possible mixture transport property combinations used herein resulted in a heat exchanger size within 4.3 % difference for the flue-gas heat recovery boiler, and within 12.3 % difference for the oil-based boiler.
A Group Contribution Method for Estimating Cetane and Octane Numbers
Energy Technology Data Exchange (ETDEWEB)
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies, and the physical properties needed to assess the viability of a potential biofuel are often not available; the only reliable information may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years, most commonly to estimate thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Published group contribution methods are often limited in the types of functional groups they cover and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
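The group-contribution values behind methods like this are obtained by regression: each fuel is a vector of group counts, and the property is modelled as a function of them. A toy least-squares fit with invented groups and data (the record's method uses an artificial neural network rather than this linear form):

```python
# Fit cetane ~ a*CH3 + b*CH2 by solving the 2x2 normal equations directly.
# Group names, counts, and cetane values are invented for illustration.

def fit_two_group_model(counts, y):
    """Ordinary least squares for two group contributions, no intercept."""
    s11 = sum(c[0] * c[0] for c in counts)
    s12 = sum(c[0] * c[1] for c in counts)
    s22 = sum(c[1] * c[1] for c in counts)
    t1 = sum(c[0] * yi for c, yi in zip(counts, y))
    t2 = sum(c[1] * yi for c, yi in zip(counts, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

counts = [(2, 5), (2, 8), (3, 4), (2, 14)]    # (CH3, CH2) per molecule, invented
y = [2 * c1 + 3 * c2 for c1, c2 in counts]    # synthetic truth: a = 2, b = 3
a, b = fit_two_group_model(counts, y)
print(round(a, 6), round(b, 6))
```

A neural-network GC model replaces the weighted sum with a learned nonlinear function of the same group-count inputs, which is what buys the wider applicability the record claims.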
Spectrum estimation method based on marginal spectrum
International Nuclear Information System (INIS)
Cai Jianhua; Hu Weiwen; Wang Xianchun
2011-01-01
The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) is proposed. The procedure for obtaining the marginal spectrum in the HHT method is given, and the linearity of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method.
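The marginal spectrum integrates the Hilbert spectrum over time: instantaneous amplitude is accumulated at the instantaneous frequency. A full HHT would first split the signal into IMFs by empirical mode decomposition; for a mono-component test signal the analytic signal alone suffices, which is the simplification made in this sketch (bin count and test frequencies are assumed values):

```python
import numpy as np

def marginal_spectrum(x, fs, nbins=50):
    """Marginal spectrum via the FFT-based analytic signal: histogram the
    instantaneous amplitude against the instantaneous frequency."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    z = np.fft.ifft(X * h)                        # analytic signal
    phase = np.unwrap(np.angle(z))
    inst_f = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
    amp = np.abs(z)[:-1]
    edges = np.linspace(0, fs / 2, nbins + 1)
    spec, _ = np.histogram(inst_f, bins=edges, weights=amp)
    return edges[:-1], spec

fs, f0 = 100.0, 5.3
t = np.arange(0, 10, 1 / fs)
freqs, spec = marginal_spectrum(np.cos(2 * np.pi * f0 * t), fs)
print(freqs[np.argmax(spec)])   # peak falls in the 5-6 Hz bin
```

For a non-stationary signal (e.g. a chirp) the instantaneous frequency sweeps, so the accumulated amplitude spreads across the swept band, which is where the method's advantage over the FFT periodogram appears.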
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limits (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost...... or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as fast as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence....... In this study, the MG-GC-factors are estimated using a systematic data and model evaluation methodology in the following way: 1) Data. Experimental flammability data is taken from the AIChE DIPPR 801 Database. 2) Initialization and sequential parameter estimation. An approximation using linear algebra provides
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
Directory of Open Access Journals (Sweden)
Zhongqiang Xiong
2018-01-01
In this work, to avoid the difficulty of applying micromechanical models to the irregular filler shapes found in experiments, the self-consistent and differential self-consistent methods were combined to obtain a decoupled equation. The combined method introduces a tensor γ, independent of filler content, that provides an important connection between high and low filler contents. On the one hand, this constant parameter can be calculated by Eshelby's inclusion theory or the Mori–Tanaka method to predict effective properties of composites consistent with their hypotheses. On the other hand, the parameter can be calculated from a few experimental results to estimate the effective properties of prepared composites at other filler contents. In addition, an evaluation index σ′f of the interaction strength between matrix and fillers is proposed based on experiments. Experimentally, a hyper-dispersant was synthesized to prepare well-dispersed polypropylene/calcium carbonate (PP/CaCO3) composites with filler contents up to 70 wt %, at a dosage of only 5 wt % of the CaCO3 content. Based on several verifications, it is hoped that the combined self-consistent method is valid for other two-phase composites in experiments, following the same application procedure as in this work.
Applicability of book value to estimate property of Polish forest management units
Directory of Open Access Journals (Sweden)
Adamowicz Krzysztof
2017-09-01
At present, various solutions are proposed for the appraisal of forest land and stands. An important business-information problem concerns the valuation of the assets of individual forest districts. In practice, numerous methods are used to estimate the property of enterprises, but there is no universal method. One practically applicable method is the book value method (BVM). In view of the above, it was decided to analyse the applicability of this method to valuing the property of forest districts. Based on the case study conducted and the discussion of its results, it was found that the original BV method may not be used to appraise the property of forest districts. The primary justification for rejecting the method was the lack of balance-sheet information on the value of forest land and stands; as a result, the value of a forest district calculated using the BVM is underestimated. A lesser but still significant effect on the estimated value of forest districts comes from cash flows related to the Forest Fund: for net contributors the estimated value of a forest district is overestimated, and for net beneficiaries it is underestimated.
Estimation of hydrologic properties of an unsaturated, fractured rock mass
International Nuclear Information System (INIS)
Klavetter, E.A.; Peters, R.R.
1986-07-01
In this document, two distinctly different approaches are used to develop continuum models to evaluate water movement in a fractured rock mass. Both models provide methods for estimating rock-mass hydrologic properties. Comparisons made over a range of different tuff properties show good qualitative and quantitative agreement between the estimates of rock-mass hydrologic properties made by the two models. This document presents a general discussion of: (1) the hydrology of Yucca Mountain and the conceptual hydrological model currently being used for the Yucca Mountain site, (2) the development of two models that may be used to estimate the hydrologic properties of a fractured, porous rock mass, and (3) a comparison of the hydrologic properties estimated by these two models. Although the models were developed in response to hydrologic characterization requirements at Yucca Mountain, they can be applied to water movement in any fractured rock mass that satisfies the given assumptions.
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic), have been modeled and analyzed. The application
Estimating the mechanical properties of the brittle deformation zones at Olkiluoto
International Nuclear Information System (INIS)
Hudson, J.A.; Cosgrove, J.W.; Johansson, E.
2008-09-01
In rock mechanics modelling to support repository design and safety assessment for the Olkiluoto site, it is necessary to obtain the relevant rock mechanics parameters, these being an essential pre-requisite for the modelling. The parameters include the rock stress state, the properties of the intact rock and the rock mass, and the properties of the brittle deformation zones which represent major discontinuities in the rock mass continuum. However, because of the size and irregularity of the brittle deformation zones, it is not easy to estimate their mechanical properties, i.e. their deformation and strength properties. Following Section 1 explaining the motivation for the work and the objective of the Report, in Sections 2 and 3, the types of fractures and brittle deformation zones that can be encountered are described with an indication of the mechanisms that lead to complex structures. The geology at Olkiluoto is then summarized in Section 4 within the context of this Report. The practical aspects of encountering the brittle deformation zones in outcrops, drillholes and excavations are described in Sections 5 and 6 with illustrative examples of drillhole core intersections in Section 7. The various theoretical, numerical and practical methods for estimating the mechanical properties of the brittle deformation zones are described in Section 8, together with a Table summarizing each method's advantages, disadvantages and utility in estimating the mechanical properties of the zones. We emphasise that the optimal approach to estimating the mechanical properties of the brittle deformation zones cannot be determined without a good knowledge, not only of each estimation method's capabilities and idiosyncrasies, but also of the structural geology background and the specific nature of the brittle deformation zones being characterized. Finally, in Section 9, a Table is presented outlining each method's applicability to the Olkiluoto site. A flowchart is included to
Estimating fair-market value for hydrocarbon producing properties
International Nuclear Information System (INIS)
Garb, F.A.
1996-01-01
The generally accepted appraisal methods used to evaluate hydrocarbon properties and prospects were described. Fair-market-value (FMV) estimates have been used in the petroleum industry in attempts to protect a purchaser against an unwise acquisition, or conversely, to establish a just price to compensate a seller. Four methods were identified for determining FMV for hydrocarbon producing properties. They are: (1) comparative sales, (2) rule of thumb, (3) income forecast, and (4) replacement cost. The differences between oil and gas FMV and real estate FMV were explained
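Of the four approaches named above, the income-forecast method is the most mechanical: discount the forecast net revenue stream to present value. A minimal sketch with illustrative (not market) volumes, prices, and discount rate:

```python
def income_forecast_fmv(volumes, price, opex, discount_rate):
    """Income-forecast appraisal: FMV as the present value of the net revenue
    from forecast production. All inputs are hypothetical illustration values."""
    fmv = 0.0
    for year, bbl in enumerate(volumes, start=1):
        net = bbl * (price - opex)                 # net revenue for the year
        fmv += net / (1 + discount_rate) ** year   # discount back to today
    return fmv

# 5-year declining production forecast (thousand bbl), $60/bbl price,
# $25/bbl lifting cost, 10% discount rate
volumes = [100, 80, 64, 51, 41]
fmv = income_forecast_fmv(volumes, 60.0, 25.0, 0.10)
print(round(fmv, 1))
```

The comparative-sales and rule-of-thumb methods skip the forecast entirely and benchmark against transactions or per-barrel multipliers, which is why the income-forecast result is usually the one sensitivity-tested.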
Effective Mechanical Property Estimation of Composite Solid Propellants Based on VCFEM
Directory of Open Access Journals (Sweden)
Liu-Lei Shen
2018-01-01
A solid rocket motor is one of the critical components of solid missiles, and its life and reliability depend largely on the mechanical behavior of the composite solid propellant (CSP). Effective mechanical properties are critical material constants for analyzing the structural integrity of propellant grain. In the present paper they are estimated by a numerical method that combines the Voronoi cell finite element method (VCFEM) with the homogenization method. The correctness of this combined method has been validated by comparison with a standard finite element method and with conventional theoretical models. The effective modulus and the effective Poisson's ratio of a CSP are estimated as they vary with volume fraction and component material properties. The results indicate that variations in the volume fraction of inclusions and in the properties of the matrix have an obvious influence on the effective mechanical properties of a CSP. The microscopic numerical analysis method proposed in this paper can also provide references for the design and analysis of other large-volume-fraction composite materials.
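Any homogenization estimate like the VCFEM result in this record can be sanity-checked against the classical Voigt (iso-strain) and Reuss (iso-stress) bounds, which bracket the effective modulus. A sketch with illustrative, propellant-like stiffness values (assumed, not from the paper):

```python
def voigt_reuss_bounds(E_matrix, E_filler, vf):
    """Classical bounds on the effective Young's modulus of a two-phase
    composite at filler volume fraction vf: Voigt is the arithmetic mean
    (upper bound), Reuss the harmonic mean (lower bound)."""
    voigt = (1 - vf) * E_matrix + vf * E_filler
    reuss = 1.0 / ((1 - vf) / E_matrix + vf / E_filler)
    return voigt, reuss

# Soft binder (~0.005 GPa) filled with stiff particles (~20 GPa),
# illustrative values at 60% filler volume fraction
v, r = voigt_reuss_bounds(0.005, 20.0, 0.60)
print(round(v, 4), round(r, 6))
```

With this extreme stiffness contrast the bounds are orders of magnitude apart, which is precisely why a microstructure-resolving estimate such as VCFEM homogenization is needed rather than a rule of mixtures.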
Estimation of mechanical properties of nanomaterials using artificial intelligence methods
Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.
2014-09-01
Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.
International Nuclear Information System (INIS)
Yue, Chaoyang; Li, Loretta Y.
2013-01-01
Physicochemical properties of PBDE congeners are important for modeling their transport, but data are often missing. The quantitative structure–property relationship (QSPR) approach is utilized to fill this gap. Individual research groups often report piecemeal properties through experimental measurements or estimation techniques, but these data seldom satisfy fundamental thermodynamic relationships because of errors. The data then lack internal consistency and cannot be used directly in environmental modeling. This paper critically reviews published experimental data to select the best QSPR models, which are then extended to all 209 PBDE congeners. Properties include aqueous solubility, vapor pressure, Henry's law constant, octanol–water partition coefficient and octanol–air partition coefficient. Their values are next adjusted to satisfy fundamental thermodynamic equations. The resulting values then take advantage of all measurements and provide quick references for modeling and PBDE-contaminated site assessment and remediation. PCBs are also compared with respect to their properties and estimation methods. -- Highlights: •Property data of PBDEs and reported experimental and estimation methods were reviewed. •Missing data were estimated for all 209 PBDEs based on selected methods. •All data were adjusted to meet thermodynamic constraints using a Visual Basic program. •The established database provides a quick reference for PBDE environmental modeling. -- Through careful selection of literature data, structure–property estimation and adjustment, key properties of 209 PBDE congeners are estimated with internal consistency.
Energy Technology Data Exchange (ETDEWEB)
Balci, Murat [Dept. of Mechanical Engineering, Bayburt University, Bayburt (Turkey)]; Gundogdu, Omer [Dept. of Mechanical Engineering, Ataturk University, Erzurum (Turkey)]
2017-01-15
In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. Laminated composite plate was modelled and simulated to obtain vibration responses for different length-to-thickness ratio in ANSYS. Furthermore, a numerical finite element model was developed for the laminated composite utilizing the Kirchhoff plate theory and programmed in MATLAB for simulations. Optimizing the difference between these two vibration responses, inverse vibration problem was solved to obtain some of the physical properties of the laminated composite using genetic algorithms. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.
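The inverse-vibration approach of this record, minimizing the mismatch between measured and modelled responses with a genetic algorithm, can be sketched on a toy forward model (frequency proportional to sqrt(E); all constants and GA settings below are invented, not the ANSYS/Kirchhoff model of the paper):

```python
import random

def natural_frequency(E, c=2.0):
    """Toy forward model: the first natural frequency of a structure of fixed
    geometry and density scales with sqrt(E); c lumps the geometric and
    inertial constants (value assumed)."""
    return c * E ** 0.5

def ga_estimate(f_measured, lo=1.0, hi=300.0, pop_size=30, gens=60, seed=3):
    """Minimal genetic algorithm: truncation selection plus Gaussian mutation
    with a shrinking step size, minimizing the frequency mismatch."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=lambda E: (natural_frequency(E) - f_measured) ** 2)
        parents = pop[: pop_size // 3]     # keep the best third (elitism)
        sigma = hi * 0.5 ** g              # mutation scale shrinks each generation
        pop = parents + [
            min(hi, max(lo, rng.choice(parents) + rng.gauss(0, sigma)))
            for _ in range(pop_size - len(parents))
        ]
    pop.sort(key=lambda E: (natural_frequency(E) - f_measured) ** 2)
    return pop[0]

E_true = 70.0                              # illustrative "true" modulus, e.g. GPa
E_est = ga_estimate(natural_frequency(E_true))
print(round(E_est, 2))
```

In the paper the scalar mismatch is replaced by the difference between two full vibration responses and the gene encodes several laminate properties at once, but the loop of sort, select, and mutate is the same.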
Joint Pitch and DOA Estimation Using the ESPRIT method
DEFF Research Database (Denmark)
Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom
2015-01-01
In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and then the ESPRIT method based on subspace techniques that exploits...... the invariance property in the time domain is first used to estimate the multi-pitch frequencies of multiple harmonic signals. Based on the estimated pitch frequencies, the DOA estimates obtained with the ESPRIT method are also presented, using the shift invariance structure in the spatial domain. Compared...... to the existing state-of-the-art algorithms, the proposed method based on ESPRIT without 2-D searching is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...
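A minimal temporal-only ESPRIT sketch conveys the shift-invariance trick the abstract relies on: the signal subspace of a sinusoid's covariance matrix, shifted by one sample, differs only by a rotation whose eigenvalue angles are the frequencies. This is a single-tone NumPy illustration, not the paper's joint spatio-temporal estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 50.0, 400
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(n)

# Hankel snapshot matrix and sample covariance
m = 20
X = np.array([x[i:i + m] for i in range(n - m)])
R = X.T @ X / (n - m)

# Signal subspace: a real sinusoid spans two complex exponentials
w, V = np.linalg.eigh(R)
Es = V[:, -2:]

# Shift invariance: solve Es[:-1] @ Phi = Es[1:]; eigenvalue angles give the frequency
Phi = np.linalg.pinv(Es[:-1, :]) @ Es[1:, :]
f_hat = np.abs(np.angle(np.linalg.eigvals(Phi))) * fs / (2 * np.pi)
print(f_hat)  # both entries close to 50 Hz
```

No grid search is needed, which is the source of ESPRIT's computational advantage over search-based subspace methods.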
Local scattering property scales flow speed estimation in laser speckle contrast imaging
International Nuclear Information System (INIS)
Miao, Peng; Chao, Zhen; Feng, Shihan; Ji, Yuanyuan; Yu, Hang; Thakor, Nitish V; Li, Nan
2015-01-01
Laser speckle contrast imaging (LSCI) has been widely used for in vivo blood flow imaging. However, the effect of the local scattering property (scattering coefficient µs) on blood flow speed estimation has not been well investigated. In this study, this effect was quantified and incorporated into the relation between the speckle autocorrelation time τc and the flow speed v, based on simulated flow experiments. For in vivo blood flow imaging, an improved estimation strategy was developed to eliminate the estimation bias due to the inhomogeneous distribution of the scattering property. Compared to traditional LSCI, the new estimation method significantly suppressed imaging noise and improved the imaging contrast of vasculatures. Furthermore, the new method successfully captured the blood flow changes and vascular constriction patterns in rats' cerebral cortex from normothermia to mild and moderate hypothermia. (letter)
Prediction of Solvent Physical Properties using the Hierarchical Clustering Method
Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties including sur...
Energy Technology Data Exchange (ETDEWEB)
Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics
2015-05-01
The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
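The variance reduction of a track-length tally over an analog (collision) tally can be seen in a toy 1-D attenuation problem: both estimators are unbiased for the same fluence integral, but the track-length score is bounded and much less dispersed. This is a deliberately simplified sketch, not the GATE implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, L, n = 1.0, 1.0, 200_000       # attenuation coefficient, slab depth, histories

s = rng.exponential(1.0 / mu, n)   # free paths of photons entering the slab at x = 0

# Track-length estimator: path length travelled inside [0, L]
tle = np.minimum(s, L)
# Analog collision estimator: score 1/mu per collision occurring inside [0, L]
analog = np.where(s < L, 1.0 / mu, 0.0)

# Both estimate the fluence integral (1 - exp(-mu*L)) / mu ≈ 0.632
print(tle.mean(), analog.mean())
print(tle.var(), analog.var())     # TLE variance is markedly smaller
```

Every history contributes to the TLE score, while only histories colliding inside the region contribute to the analog score, which is the intuition behind the efficiency gains the paper quantifies.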
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2009-10-01
Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
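The ORR-from-OLS construction the abstract references adds a ridge constant k to the normal equations, shrinking coefficient estimates that multicollinearity has inflated. A NumPy sketch on deliberately near-collinear data (k chosen ad hoc for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.standard_normal(n)
X = np.column_stack([x1, x1 + 0.01 * rng.standard_normal(n)])  # near-collinear
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                   # OLS
k = 0.1
beta_orr = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)   # ordinary ridge

# Ridge shrinks the inflated collinear OLS estimates toward zero
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_orr))
```

URR and the paper's MUR modify the right-hand side with prior information as well; the snippet shows only the base ORR/OLS relationship.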
The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncert...
Estimating aquifer properties from the water level response to Earth tides.
Cutillo, Paula A; Bredehoeft, John D
2011-01-01
Water level fluctuations induced by tidal strains can be analyzed to estimate the elastic properties, porosity, and transmissivity of the surrounding aquifer material. We review underutilized methods for estimating aquifer properties from the confined response to earth tides. The earth tide analyses are applied to an open well penetrating a confined carbonate aquifer. The resulting ranges of elastic and hydraulic aquifer properties are in general agreement with those determined by other investigators for the area of the well. The analyses indicate that passive monitoring data from wells completed in sufficiently stiff, low-porosity formations can provide useful information on the properties of the surrounding formation. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
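At the core of such analyses is a least-squares harmonic fit of tidal constituents to the water-level record; the fitted amplitude and phase then feed the elastic-property calculations. A sketch for the M2 constituent (1.9323 cycles/day) on synthetic data with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 24)   # 30 days of hourly samples, in days
f_m2 = 1.9323                  # M2 tidal frequency, cycles per day
# Synthetic water level: 12 mm M2 oscillation plus noise
wl = 0.012 * np.cos(2 * np.pi * f_m2 * t - 0.8) + 0.002 * rng.standard_normal(t.size)

# Harmonic regression: wl ≈ a*cos + b*sin + constant
A = np.column_stack([np.cos(2 * np.pi * f_m2 * t),
                     np.sin(2 * np.pi * f_m2 * t),
                     np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, wl, rcond=None)[0]
amp = np.hypot(a, b)           # M2 amplitude (m), input to strain-sensitivity estimates
print(amp)  # recovers roughly 0.012 m
```

A full analysis would fit several constituents at once and correct for barometric effects before converting amplitude to strain sensitivity.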
Regressional Estimation of Cotton Sirospun Yarn Properties from Fibre Properties
Directory of Open Access Journals (Sweden)
Bedez Üte Tuba
2014-09-01
In this paper, the aim is to determine equations and models for estimating sirospun yarn quality characteristics from yarn production parameters and cotton fibre properties, focusing on fibre bundle measurements represented by HVI (high volume instrument). For this purpose, a total of 270 sirospun yarn samples were produced on the same ring spinning machine under the same conditions at Ege University, using 11 different cotton blends, three different strand spacing settings, four different yarn counts and three different twist coefficients. The sirospun yarn and cotton fibre property interactions were investigated by correlation analysis. For the prediction of yarn quality characteristics, multivariate linear regression methods were performed. As a result of the study, equations were generated for the prediction of yarn tenacity, breaking elongation, unevenness and hairiness from fibre and yarn properties. After the goodness-of-fit statistics, very large determination coefficients (R² and adjusted R²) were observed.
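The multivariate linear regression and the R²/adjusted-R² goodness-of-fit statistics reported above can be sketched in a few lines of NumPy. Synthetic stand-in data (not the Ege University measurements), with hypothetical predictor names:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 270, 3                                   # samples, fibre-property predictors
X = rng.standard_normal((n, p))                 # e.g. fibre strength, length, fineness
y = X @ np.array([0.8, -0.3, 0.5]) + 0.4 * rng.standard_normal(n)  # e.g. tenacity

Xd = np.column_stack([np.ones(n), X])           # add intercept column
beta = np.linalg.lstsq(Xd, y, rcond=None)[0]    # least-squares coefficients
resid = y - Xd @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2)                               # adjusted R² is always <= R²
```

Adjusted R² penalizes the predictor count, which matters when comparing equations with different numbers of fibre properties.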
A Systematic Identification Method for Thermodynamic Property Modelling
DEFF Research Database (Denmark)
Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent
2017-01-01
In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...... model is used. Using the proposed method for estimating the interaction parameters using only VLE data, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model performance...
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network
Directory of Open Access Journals (Sweden)
Zhibin Yu
2017-01-01
Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods. But these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proved in image processing and computer vision fields with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image that may even be noisy. The imaging depth information is considered as an aided input to help our model make better decisions.
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.
Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui
2017-01-01
DEFF Research Database (Denmark)
Kontogeorgis, Georgios; Ioannis, Smirlis; Iakovos, Yakoumis
1997-01-01
Cubic equations of state (EoS) are often used for correlating and predicting phase equilibria. Before extending any EoS to mixtures, reliable vapor-pressure prediction is essential. This requires experimental, if possible, critical temperatures Tc, pressures Pc, and acentric factor omega...... The proposed scheme employs a recent group-contribution method (Constantinou et al. Fluid Phase Equilib. 1995, 103 (1), 11) for estimating the acentric factor. The two critical properties are estimated via a generalized correlation for the ratio Tc/Pc (with the van der Waals surface area) and the cubic EoS...... pressures for several nonpolar and slightly polar heavy compounds with very satisfactory results, essentially independent of the experimental point used. Furthermore, the method yields critical properties for heavy alkanes (Nc > 20) and other compounds which are in very good agreement with recent available......
Oxygen transport properties estimation by DSMC-CT simulations
Energy Technology Data Exchange (ETDEWEB)
Bruno, Domenico [Istituto di Metodologie Inorganiche e dei Plasmi, Consiglio Nazionale delle Ricerche - Via G. Amendola, 122 - 70125 Bari (Italy); Frezzotti, Aldo; Ghiroldi, Gian Pietro [Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano - Via La Masa, 34 - 20156 Milano (Italy)
2014-12-09
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase of computational effort outlined in the early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time (Ferrigni, 2012; Norman et al., 2013) of DSMC-CT simulations. The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface, obtained by molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first one, transport properties are obtained from DSMC-CT simulations of spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids, 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated in a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, their estimated statistical error not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
Estimation of the thermal properties in alloys as an inverse problem
International Nuclear Information System (INIS)
Zueco, J.; Alhama, F.
2005-01-01
This paper provides an efficient numerical method for estimating the thermal conductivity and heat capacity of alloys, as functions of temperature, starting from temperature measurements (including errors) in heating and cooling processes. The proposed procedure is a modification of the known function estimation technique, typical of the inverse problem field, in conjunction with the network simulation method (already validated in many nonlinear problems) as the numerical tool. Estimation requires only a single measurement point. The methodology is applied to determine these thermal properties in alloys within temperature ranges where allotropic changes take place. These changes are characterized by sharp temperature dependencies. (Author) 13 refs
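The flavor of such inverse estimation can be shown on a deliberately simplified stand-in: a lumped-capacitance cooling curve recorded at a single measurement point, whose time constant (which bundles the thermal properties) is recovered by least squares. This is a sketch of the inverse-problem loop, not the authors' function estimation or network simulation method:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic noisy cooling record at one measurement point
rng = np.random.default_rng(0)
tau_true, T_inf, T0 = 120.0, 25.0, 400.0        # illustrative values (s, °C, °C)
t = np.linspace(0, 600, 61)
T_meas = T_inf + (T0 - T_inf) * np.exp(-t / tau_true) + 1.0 * rng.standard_normal(t.size)

def model(t, tau):
    """Lumped-capacitance cooling; tau encodes the thermal properties."""
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

(tau_hat,), _ = curve_fit(model, t, T_meas, p0=[50.0])
print(tau_hat)  # close to the true 120 s despite the measurement noise
```

The paper's problem is harder because conductivity and heat capacity are unknown *functions* of temperature, but the structure (forward model inside a residual-minimizing loop) is the same.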
New methods of testing nonlinear hypothesis using iterative NLLS estimator
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses a method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using the iterative NLLS estimator based on nonlinear studentized residuals, has also been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
Estimation of Properties of Pure Organic Substances with Group and Pair Contributions
Directory of Open Access Journals (Sweden)
J.E.S. Ourique
1997-06-01
This work presents a new predictive method for the estimation of properties of pure organic substances. Each compound is assigned a molecular graph or an adjacency matrix representing its chemical structure, from which properties are then obtained as a summation of all contributions associated with functional groups and chemically bonded pairs of groups. The proposed technique is applied to the estimation of the critical temperature, critical pressure, critical volume and normal boiling point of 325 organic compounds from different chemical species. Accurate predictions based solely on chemical structure are obtained.
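The group-summation idea in its simplest form looks like the classic Joback method, whose boiling-point contributions are used here as stand-in values (the paper's own group and pair contributions are not reproduced in the abstract):

```python
# Joback-style estimate: Tb = 198.2 K + sum of group contributions (values in K)
joback_tb = {"-CH3": 23.58, ">CH2": 22.88, "-OH": 92.88}

def boiling_point(groups):
    """Normal boiling point from a group-count dictionary."""
    return 198.2 + sum(joback_tb[g] * n for g, n in groups.items())

ethanol = {"-CH3": 1, ">CH2": 1, "-OH": 1}   # CH3-CH2-OH
print(boiling_point(ethanol))                # 337.54 K (experimental: ~351.4 K)
```

Pair contributions, as in the paper, add terms for bonded group pairs on top of this sum, which captures proximity effects a pure group count misses.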
Directory of Open Access Journals (Sweden)
Federico Scarpa
2015-01-01
The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, will propagate through the estimation process, and the accuracy of the reconstructed thermophysical property values can deteriorate. In this work the effect on the estimated thermophysical properties due to errors in the initial temperature distribution is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is greatly improved.
The Burr X Pareto Distribution: Properties, Applications and VaR Estimation
Directory of Open Access Journals (Sweden)
Mustafa Ç. Korkmaz
2017-12-01
In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods for the model parameters are examined. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR) by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.
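The peaks-over-threshold VaR recipe the abstract mentions can be sketched with the generalized Pareto benchmark model (using SciPy; synthetic losses, not the paper's Burr X Pareto distribution or its time series data):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.pareto(3.0, 5000)               # heavy-tailed synthetic loss sample

u = np.quantile(losses, 0.95)                # threshold at the 95th percentile
exc = losses[losses > u] - u                 # exceedances over the threshold
xi, _, sigma = genpareto.fit(exc, floc=0)    # fit GPD shape and scale to exceedances

# POT quantile formula: VaR_p = u + (sigma/xi) * (((n/Nu)*(1-p))**(-xi) - 1)
p, n_tot, n_u = 0.99, losses.size, exc.size
var_p = u + sigma / xi * (((n_tot / n_u) * (1 - p)) ** (-xi) - 1)
print(var_p)  # the 99% VaR lies above the 95% threshold u
```

Swapping the GPD for the Burr X Pareto tail model, as the paper proposes, changes only the fitted tail distribution in this pipeline.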
Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain...... a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that we in most cases obtain a higher spectral resolution than HMUSIC....
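The root-near-the-unit-circle idea (frequency read off as the argument of a polynomial root, no grid search) can be shown with a much simpler root-based estimator than the paper's subspace polynomial: an AR(2) fit to a sinusoid, whose characteristic roots sit at e^(±jω). A hedged stand-in, not the proposed HMUSIC-style method:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 8000.0, 440.0, 1000
x = np.cos(2 * np.pi * f0 * np.arange(n) / fs) + 0.01 * rng.standard_normal(n)

# A noise-free sinusoid satisfies x[k] = a1*x[k-1] + a2*x[k-2]; fit a1, a2
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Roots of z^2 - a1*z - a2; pick the root closest to the unit circle
roots = np.roots([1.0, -a1, -a2])
z = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
f_hat = abs(np.angle(z)) * fs / (2 * np.pi)
print(f_hat)  # close to 440 Hz, obtained without any frequency grid
```

The paper applies the same principle to a harmonic subspace polynomial, which is what yields the resolution advantage over grid-searched HMUSIC.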
DEFF Research Database (Denmark)
Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent
2012-01-01
The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI)) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated...... property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality......, poly functional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox is used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22......
Dual ant colony operational modal analysis parameter estimation method
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Moreover, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Estimation of in-situ joint properties, Naesliden Mine
Energy Technology Data Exchange (ETDEWEB)
Barton, N.
1980-05-15
Finite element modelling of jointed rock masses requires detailed input data concerning the mechanical behaviour of the relevant joint sets. In the case of the Naesliden project, the properties of the footwall and hanging wall contacts were of particular concern because of their planarity. Methods of estimating the full-scale shear strength and shear stiffness are summarized. The estimates are based on assessment of full-scale values of the joint roughness coefficient (JRC), the joint wall compressive strength (JCS) and the residual friction angle. Sensitivity analyses indicate which of these parameters need to be determined with greatest accuracy at the levels of normal stress of interest. The full-scale estimates are compared with laboratory scale data and with data obtained from small scale tilt tests and tests on model tension fractures. A scale effect makes direct application of laboratory data of doubtful value. A simple dimensionless shear force-displacement formulation is suggested that describes the mobilization and subsequent reduction of joint roughness, as peak strength is exceeded during a given shearing event. The effect of changing normal stress during shearing is accounted for using this method.
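The three parameters named above (JRC, JCS, residual friction angle) combine in Barton's empirical peak shear strength criterion, which is what the full-scale estimates feed into. A sketch with illustrative values, not the Naesliden data:

```python
import math

def peak_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
    """Barton criterion: tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n)),
    angles in degrees, stresses in consistent units (e.g. MPa)."""
    angle = phi_r_deg + jrc * math.log10(jcs / sigma_n)
    return sigma_n * math.tan(math.radians(angle))

# Illustrative joint parameters at 2 MPa normal stress
tau = peak_shear_strength(sigma_n=2.0, jrc=8.0, jcs=50.0, phi_r_deg=28.0)
print(tau)  # peak strength exceeds the purely residual sigma_n * tan(28°)
```

The roughness term JRC·log10(JCS/σn) shrinks as normal stress rises toward JCS, which is why the sensitivity analyses in the report depend on the normal stress levels of interest.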
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
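MoLC in its simplest instance: for a gamma distribution the second log-cumulant equals the trigamma of the shape, Var[ln X] = ψ′(k), so the shape is recovered by numerically inverting the trigamma function on the sample log-variance. A hedged single-family sketch of the approach the paper analyzes:

```python
import numpy as np
from scipy.special import polygamma, digamma
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=50_000)

# Second log-cumulant of gamma(k, theta) is trigamma(k); invert it numerically
c2 = np.log(x).var()
k_hat = brentq(lambda k: float(polygamma(1, k)) - c2, 1e-3, 1e3)

# First log-cumulant gives the scale: E[ln X] = digamma(k) + ln(theta)
theta_hat = np.exp(np.log(x).mean() - digamma(k_hat))
print(k_hat, theta_hat)  # close to the true (3.0, 2.0)
```

No likelihood maximization is involved, which is the computational appeal of MoLC; the paper's contribution is characterizing when such inversions are consistent and applicable.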
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
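The conjugate gamma-Poisson update that such models build on is tractable for a single rate; the paper's contribution is the correlated multivariate extension via Bayes linear methods. The single-rate core, with illustrative numbers:

```python
# Gamma(a, b) prior on a Poisson event rate; observing x events over exposure T
# gives a Gamma(a + x, b + T) posterior -- the standard conjugate update.
a, b = 2.0, 4.0          # prior: mean a/b = 0.5 events per unit time
x, T = 9, 12.0           # observed: 9 events in 12 time units

a_post, b_post = a + x, b + T
post_mean = a_post / b_post
mle = x / T
print(post_mean)         # 11/16 = 0.6875, between the prior mean 0.5 and the MLE 0.75
```

The posterior mean is a precision-weighted compromise between prior and data; ignoring correlation between rates, as the paper's example shows, biases exactly this kind of estimate.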
Weak Properties and Robustness of t-Hill Estimators
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Fabián, Zdeněk; Hermann, P.; Střelec, L.; Rivera, A.; Girard, S.; Torres, S.; Stehlík, M.
2016-01-01
Roč. 19, č. 4 (2016), s. 591-626 ISSN 1386-1999 Institutional support: RVO:67985807 Keywords : asymptotic properties of estimators * point estimation * t-Hill estimator * t-lgHill estimator Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.679, year: 2016
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-03-01
The Power Reactor and Nuclear Fuel Development Corporation (PRNFDC) and the Institute of Resources and Environment Technology (IRET) of the Agency of Industrial Science and Technology conducted cooperative research, under a three-year plan beginning in fiscal 1989, on indoor fundamental testing of an in-situ acoustic emission (AE) measuring instrument designed to capture property changes from the micro elastic waves generated by the formation and propagation of rock deformation and fracture, together with AE measurement and specific resistance tomography. In fiscal 1994, a survey on specific resistance tomography for pre-research on the tunnel excavation effect test and an indoor-scale experiment on specific resistance measurement for the fundamental study of the estimation method were conducted. In fiscal 1995, with emphasis on the indoor-scale experiments, the excavation effect estimation method was evaluated and investigated on the basis of past study results and the fundamental experiment results. This paper reports the results of these experiments conducted at IRET and PRNFDC in fiscal 1994 and 1995. (G.K.)
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
Conventional estimating method of earthquake response of mechanical appendage system
International Nuclear Information System (INIS)
Aoki, Shigeru; Suzuki, Kohei
1981-01-01
Generally, the earthquake response of an appendage structural system installed in a main structural system has been estimated by floor response analysis, using the response spectra at the point where the appendage system is installed. Estimation of the earthquake response of appendage systems by statistical procedures based on probability process theory has also been reported. The development of a practical method for simply estimating the response is an important subject in aseismic engineering. In this study, a method was investigated for estimating the earthquake response of an appendage system in the general case where the natural frequencies of the two structural systems differ. First, it was shown that the floor response amplification factor (FRAF) can be estimated simply from the ratio of the natural frequencies of the two structural systems, and its statistical properties were clarified. Next, it was shown that the procedure of expressing acceleration, velocity and displacement responses simultaneously with tri-axial response spectra can be applied to the expression of the FRAF. The applicability of this procedure to nonlinear systems was examined. (Kako, I.)
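The dependence of the appendage amplification on the frequency ratio can be illustrated with the textbook steady-state transmissibility of a lightly damped single-degree-of-freedom oscillator under harmonic support motion. This is a stand-in illustration, not the paper's statistical procedure; the damping ratio and frequency ratios below are arbitrary example values.

```python
import math

def transmissibility(r, zeta):
    """Steady-state amplification of a light SDOF appendage under
    harmonic support motion, as a function of the frequency ratio
    r = (excitation frequency) / (appendage natural frequency)
    and the damping ratio zeta (classical transmissibility formula)."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# amplification is near 1 well below resonance, peaks near r = 1,
# and falls below 1 well above resonance
for r in (0.2, 1.0, 3.0):
    print(r, round(transmissibility(r, 0.05), 2))
```

The sharp peak near r = 1 is why the ratio of the two systems' natural frequencies is the controlling variable in floor response amplification.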
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
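The two-stage idea (avoid solving the ODE by first approximating the state trajectory, then regressing the model on it) can be sketched on a scalar example. This is a simplified illustration under assumed choices, not the paper's MPLE: stage 1 here uses central differences on noise-free data where the paper uses smoothing, and the model x' = θ·x is hypothetical.

```python
import math

def two_stage_rate(ts, xs):
    """Stage 1: approximate x'(t) from the data (central differences
    here; a smoother would be used with noisy data).
    Stage 2: least-squares regression of x' on x for the model
    x' = theta * x.  No ODE solver is ever called."""
    dx = [(xs[i + 1] - xs[i - 1]) / (ts[i + 1] - ts[i - 1])
          for i in range(1, len(ts) - 1)]
    x_mid = xs[1:-1]
    return (sum(d * x for d, x in zip(dx, x_mid))
            / sum(x * x for x in x_mid))

# Synthetic trajectory of x' = 0.8 x
ts = [i / 100 for i in range(101)]
xs = [math.exp(0.8 * t) for t in ts]
theta = two_stage_rate(ts, xs)   # close to 0.8
```

The computational saving is exactly the point made in the abstract: each evaluation costs one regression rather than one numerical ODE solve per likelihood evaluation.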
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings and in sound field simulations. Those methods require the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Estimation of water percolation by different methods using TDR
Directory of Open Access Journals (Sweden)
Alisson Jadavi Pereira da Silva
2014-02-01
Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate percolation estimated by time domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and the change in water storage in the soil profile measured at 16 moisture-measurement points at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was conducted in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation at short time intervals and exemption from predetermining soil hydraulic properties such as water retention and hydraulic conductivity. The estimates of percolation obtained by the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
Sparse and shrunken estimates of MRI networks in the brain and their influence on network properties
DEFF Research Database (Denmark)
Romero-Garcia, Rafael; Clemmensen, Line Katrine Harder
2014-01-01
Four regularization methods were examined: two with shrinkage (Ridge and Schäfer's shrinkage), one with sparsity (Lasso) and one with both shrinkage and sparsity (Elastic net). The regularized approaches showed more stable results, with relatively low variance at the expense of a little bias. Interestingly, topological properties such as local and global efficiency estimated in networks constructed from traditional non-regularized correlations also showed higher variability when compared to those from regularized networks. Furthermore, the different regularizations resulted in different correlation estimates as well as network properties; the shrunken estimates resulted in lower variance of the estimates than the sparse estimates. Our findings suggest that a population-based connectivity study can achieve a more robust description of cortical topology through regularization of the correlation estimates.
DEFF Research Database (Denmark)
Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens
2016-01-01
Regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of the estimated parameters was performed. Therefore, every estimated value of the flammability-related properties is reported together with its corresponding 95%-confidence interval of the prediction. Compared to existing models the developed ones have a higher accuracy, are simple to apply and provide uncertainty information on the calculated prediction. The average relative error and correlation coefficient are 11.5% and 0.99 for LFL, 15.9% and 0.91 for UFL, 2...
International Nuclear Information System (INIS)
Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.
1981-12-01
Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that is normally inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low-failure-probability components, it was found that the simple prior matching moments method is often superior (e.g., smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
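Method (1) of the list above, matching the moments of the prior to those of the data, has a short closed form for the beta distribution. A minimal sketch with made-up failure counts (the data below are hypothetical, not from the report):

```python
def beta_prior_moments(failures, demands):
    """Fit Beta(alpha, beta) by matching the mean m and variance v of
    the observed failure fractions k_i/n_i:
        alpha = m * (m(1-m)/v - 1),  beta = (1-m) * (m(1-m)/v - 1).
    Requires v < m(1-m), i.e. less spread than a Bernoulli."""
    ps = [k / n for k, n in zip(failures, demands)]
    m = sum(ps) / len(ps)
    v = sum((p - m) ** 2 for p in ps) / len(ps)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common   # (alpha, beta)

# hypothetical failure-on-demand records: k_i failures in n_i demands
alpha, beta = beta_prior_moments([1, 2, 1, 3], [50, 40, 25, 60])
```

By construction the fitted prior mean alpha/(alpha+beta) reproduces the sample mean of the failure fractions, which is why this simple estimator behaves well for small samples of low-failure-probability data.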
Estimating building energy consumption using extreme learning machine method
International Nuclear Information System (INIS)
Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey
2016-01-01
The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as for the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future, and has thus led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimations and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
Approach suitable for screening estimation methods for critical properties of heavy compounds
DEFF Research Database (Denmark)
Zbogar, A.; vidal da Silva Lopes, F.; Kontogeorgis, Georgios
2006-01-01
...- and diacids, alkenes, cyclo/phenylalkanes, and squalane). This and previous validations verify that this correlation has a much broader application range, up to (Tc/pc) ratios of 200, than the data used in its development (compounds with ratios up to 100). It seems that most organic compounds, including the very heavy and complex ones, follow the trend suggested by this equation. This equation can be used for testing existing group-contribution estimation methods. It is shown here that direct comparison of the Joback and Constantinou-Gani methods for two families of compounds (alkenes and carboxylic acids) is in agreement with their validation via the proposed equation. Similar results have been obtained for other compounds. Both group-contribution methods are of equal accuracy for heavy alkenes and acids, provided that experimental boiling point temperatures are available for the Joback method. If such data...
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures which consequently necessitates the need for accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests were conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent off-set between the estimated values and the previously determined values was found. Another effort
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of this level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of these estimation methods.
An empirical method to estimate bulk particulate refractive index for ocean satellite applications
Digital Repository Service at National Institute of Oceanography (India)
Suresh, T.; Desa, E.; Mascarenhas, A.A.M.Q.; Matondkar, S.G.P.; Naik, P.; Nayak, S.R.
An empirical method is presented here to estimate the bulk particulate refractive index using measured inherent and apparent optical properties from the various water types of the Arabian Sea. The empirical model, where the bulk refractive index...
Estimation of thermophysical properties in the system Li-Pb
International Nuclear Information System (INIS)
Jauch, U.; Schulz, B.
1986-01-01
Based on the phase diagram and the knowledge of thermophysical properties data of alloys and intermetallic compounds in the Li-Pb system, quantitative relationships between several properties and between the properties in solid and liquid state are used: to interpret the results on thermophysical properties in the quasibinary system LiPb-Pb and to estimate unknown properties in the concentration range 100 > Li (at.%) > 50. (orig.)
Rock mass mechanical property estimations for the Yucca Mountain Site Characterization Project
International Nuclear Information System (INIS)
Lin, M.; Hardy, M.P.; Bauer, S.J.
1993-06-01
Rock mass mechanical properties are important in the design of drifts and ramps. These properties are used in evaluations of the impacts of thermomechanical loading of potential host rock within the Yucca Mountain Site Characterization Project. Representative intact rock and joint mechanical properties were selected for welded and nonwelded tuffs from the currently available data sources. Rock mass qualities were then estimated using both the Norwegian Geotechnical Institute (Q) and Geomechanics Rating (RMR) systems. Rock mass mechanical properties were developed based on estimates of rock mass quality, the current knowledge of intact properties, and fracture/joint characteristics. Empirical relationships developed to correlate the rock mass quality indices and the rock mass mechanical properties were then used to estimate the range of rock mass mechanical properties
Estimating perception of scene layout properties from global image features.
Ross, Michael G; Oliva, Aude
2010-01-08
The relationship between image features and scene structure is central to the study of human visual perception and computer vision, but many of the specifics of real-world layout perception remain unknown. We do not know which image features are relevant to perceiving layout properties, or whether those features provide the same information for every type of image. Furthermore, we do not know the spatial resolutions required for perceiving different properties. This paper describes an experiment and a computational model that provide new insights on these issues. Humans perceive global spatial layout properties, such as dominant depth, openness, and perspective, from a single image. This work describes an algorithm that reliably predicts human layout judgments. The model's predictions are general, not specific to the observers it trained on. Analysis reveals that the optimal spatial resolutions for determining layout vary with the content of the space and the property being estimated. Openness is best estimated at high resolution, depth at medium resolution, and perspective at low resolution. Given the reliability and simplicity of estimating the global layout of real-world environments, this model could help resolve perceptual ambiguities encountered by more detailed scene reconstruction schemas.
A method to estimate stellar ages from kinematical data
Almeida-Fernandes, F.; Rocha-Pinto, H. J.
2018-05-01
We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS), which contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for estimating rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, which is an M8V star, obtaining t_UVW = 12.50 (+0.29, −6.23) Gyr.
Cross-property relations and permeability estimation in model porous media
International Nuclear Information System (INIS)
Schwartz, L.M.; Martys, N.; Bentz, D.P.; Garboczi, E.J.; Torquato, S.
1993-01-01
Results from a numerical study examining cross-property relations linking fluid permeability to diffusive and electrical properties are presented. Numerical solutions of the Stokes equations in three-dimensional consolidated granular packings are employed to provide a basis of comparison between different permeability estimates. Estimates based on the Λ parameter (a length derived from electrical conduction) and on d_c (a length derived from immiscible displacement) are found to be considerably more reliable than estimates based on rigorous permeability bounds related to pore space diffusion. We propose two hybrid relations based on diffusion which provide more accurate estimates than either of the rigorous permeability bounds
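The Λ-based estimate discussed here is commonly written in the textbook Johnson-Koplik-Schwartz form k ≈ Λ²/(8F), with F the electrical formation factor. A minimal sketch under that assumed form (the sample numbers are illustrative, not from the paper):

```python
def permeability_from_lambda(lam, F, c=8.0):
    """Lambda-parameter cross-property estimate of permeability,
    k ~ Lambda^2 / (c * F), with F the electrical formation factor
    and c ~ 8 in the textbook JKS form for granular media."""
    return lam ** 2 / (c * F)

# e.g. Lambda = 10 micrometres, formation factor 15 (hypothetical values)
k = permeability_from_lambda(10e-6, 15.0)   # permeability in m^2
```

The appeal of such cross-property relations is that Λ and F come from an electrical measurement, which is far cheaper than a direct flow experiment.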
Directory of Open Access Journals (Sweden)
I. N. Alexandrov
2011-01-01
Various intellectual property (IP) estimation approaches and innovations in this field are discussed. Problem situations and «bottlenecks» in the economic mechanism of transformation of innovations into useful products and services are defined. The main international IP evaluation methods are described, particular attention being paid to the «Quick Inside» program, described as a latest-generation global expert system. IP income and expense evaluation methods used in domestic practice are discussed. The possibility of using the Black-Scholes option model to estimate the value of intangible assets is also studied.
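The Black-Scholes option model mentioned above has a standard closed form. A minimal sketch of the call-value formula; the mapping of its inputs onto IP valuation in the comment is the usual real-options reading, and the numbers at the end are illustrative:

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes value of a European call.  In a real-options IP
    valuation, S is read as the present value of expected returns from
    the asset, K as the cost of commercialisation, sigma as the
    volatility of those returns, and T as the decision horizon."""
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # ~10.45
```

The option framing captures what income and expense methods miss: the right, but not the obligation, to commercialise the asset later.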
Boundary methods for mode estimation
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
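The AIC stopping rule mentioned at the end works by penalising log-likelihood with the parameter count. A deliberately simplified sketch (not the paper's BM, MOG, or k-means implementation): candidate models assign each point to the nearest of k fixed centres with one pooled variance, and AIC selects k.

```python
import math

def gaussian_aic(data, means):
    """AIC = 2k - 2 ln L for modelling `data` as Gaussians centred on
    the nearest of `means`, with one pooled MLE variance
    (k = len(means) + 1 free parameters in this toy model)."""
    n = len(data)
    rss = sum(min((x - m) ** 2 for m in means) for x in data)
    var = rss / n
    loglik = -n / 2 * (math.log(2 * math.pi * var) + 1)
    k = len(means) + 1
    return 2 * k - 2 * loglik

data = [-3.1, -2.9, -3.0, 2.9, 3.0, 3.1]       # clearly bimodal toy data
one = gaussian_aic(data, [0.0])
two = gaussian_aic(data, [-3.0, 3.0])
best = 1 if one < two else 2                   # AIC selects two modes here
```

The penalty term 2k is what stops the criterion from always preferring more modes: extra centres must buy enough likelihood to pay for their parameters.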
Modified generalized method of moments for a robust estimation of polytomous logistic model
Directory of Open Access Journals (Sweden)
Xiaoshan Wang
2014-07-01
The maximum likelihood estimation (MLE) method, typically used for polytomous logistic regression, is prone to bias due to both misclassification in the outcome and contamination in the design matrix. Hence, robust estimators are needed. In this study, we propose such a method for nominal response data with continuous covariates. A generalized method of weighted moments (GMWM) approach is developed for dealing with contaminated polytomous response data. In this approach, distances are calculated based on individual sample moments, and Huber weights are applied to observations with large distances. Mallows-type weights are also used to downweight leverage points. We describe theoretical properties of the proposed approach. Simulations suggest that the GMWM performs very well in correcting contamination-caused biases. An empirical application of the GMWM estimator to survey data demonstrates its usefulness.
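The Huber-weighting idea can be shown on the simplest possible case: a robust straight-line fit by iteratively reweighted least squares. This is a simplified stand-in for the abstract's GMWM (straight line instead of a polytomous logistic model), with hypothetical data containing one gross outlier.

```python
def huber_line(xs, ys, c=1.345, iters=30):
    """Robust slope/intercept via iteratively reweighted least squares
    with Huber weights w = min(1, c*s / |residual|), where s is a MAD
    scale estimate; large-residual points are progressively downweighted."""
    n = len(xs)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        xb = sum(wi * x for wi, x in zip(w, xs)) / sw
        yb = sum(wi * y for wi, y in zip(w, ys)) / sw
        sxx = sum(wi * (x - xb) ** 2 for wi, x in zip(w, xs))
        sxy = sum(wi * (x - xb) * (y - yb) for wi, x, y in zip(w, xs, ys))
        a = sxy / sxx
        b = yb - a * xb
        res = [y - (a * x + b) for x, y in zip(xs, ys)]
        mad = sorted(abs(r) for r in res)[n // 2]
        s = mad / 0.6745 if mad > 1e-12 else 1.0
        w = [min(1.0, c * s / abs(r)) if abs(r) > 1e-12 else 1.0 for r in res]
    return a, b

xs = list(range(10))
ys = [2 * x + 1 for x in xs]       # true line y = 2x + 1
ys[9] = 50                         # gross outlier at the right end
slope, intercept = huber_line(xs, ys)   # slope stays near 2
```

Ordinary least squares on the same data is pulled to a slope of roughly 3.7 by the single outlier; the Huber weights shrink its influence at each iteration instead of discarding it outright.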
Estimation of soil properties and free product volume from baildown tests
International Nuclear Information System (INIS)
Zhu, J.L.; Parker, J.C.; Lundy, D.A.; Zimmerman, L.M.
1993-01-01
Baildown tests, involving measurement of water and free product levels in a monitoring well after bailing, are often performed at spill sites to estimate the oil volume per unit area -- which the authors refer to as "oil specific volume." Spill volume is estimated by integrating oil specific volume over the areal domain of the spill. Existing methods for interpreting baildown tests are based on grossly simplistic approximations of soil capillary properties that cannot accurately describe the transient well response. A model for vertical equilibrium oil distributions based on the van Genuchten capillary model has been documented and verified in the laboratory and in the field by various authors. The model enables oil specific volume and oil transmissivity to be determined as functions of well product thickness. This paper describes a method for estimating van Genuchten capillary parameters, as well as aquifer hydraulic conductivity, from baildown tests. The results yield the relationships of oil specific volume and oil transmissivity to apparent product thickness, which may be used, in turn, to compute spill volume and to model free product plume movement and free product recovery. The method couples a finite element model for radial flow of oil and water to a well with a nonlinear parameter estimation algorithm. Effects of the filter pack around the well on the fluid level response are considered explicitly by the model. The method, which is implemented in the program BAILTEST, is applied to field data from baildown tests. The results indicate that hydrographs of water and oil levels are accurately described by the model
Estimation of Physical Properties for Hydrogen Isotopes Using Aspen Plus Simulator
International Nuclear Information System (INIS)
Cho, Jung Ho; Yun, Sei Hun; Cho, Seung Yon; Chang, Min Ho; Kang, Hyun Goo; Jung, Ki Jung; Kim, Dong Min
2009-01-01
The hydrogen isotope molecular species are H2, HD, D2, HT, DT and T2. Among these, the physical properties of H2, HD and D2 are included in Aspen Plus, while HT, DT and T2 are not. In this study, various thermodynamic properties were estimated for the six isotopic species through fixed properties and temperature-dependent properties. To estimate the thermodynamic properties, the Soave-modified Redlich-Kwong equation of state and the Aspen Plus simulator were used. The results were verified by comparison with PRO/II with PROVISION of Invensys
Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'
International Nuclear Information System (INIS)
Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi
1996-01-01
To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'indirect estimation method for calculation error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones gives the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)
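The source multiplication technique underlying the first method rests on the point-kinetics relation that a detector count rate in a subcritical system scales as C ∝ S/(1 − k_eff). A minimal sketch of that textbook relation only (not the paper's correlation procedure); the reference state and count rates are hypothetical:

```python
def keff_from_counts(c_ref, k_ref, c_meas):
    """Point-kinetics source-multiplication estimate: with count rate
    C ~ S / (1 - k_eff) for a fixed source and detector, a state with
    known k_ref gives  1 - k = (1 - k_ref) * c_ref / c_meas."""
    return 1.0 - (1.0 - k_ref) * c_ref / c_meas

# hypothetical: reference state k = 0.95 gave 1000 cps; new state gives 2500 cps
k = keff_from_counts(c_ref=1000.0, k_ref=0.95, c_meas=2500.0)   # 0.98
```

A higher count rate for the same source implies a system closer to critical, which is the qualitative content of the formula.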
Aquifer Recharge Estimation In Unsaturated Porous Rock Using Darcian And Geophysical Methods.
Nimmo, J. R.; De Carlo, L.; Masciale, R.; Turturro, A. C.; Perkins, K. S.; Caputo, M. C.
2016-12-01
Within the unsaturated zone a constant downward gravity-driven flux of water commonly exists at depths ranging from a few meters to tens of meters depending on climate, medium, and vegetation. In this case a steady-state application of Darcy's law can provide recharge rate estimates. We have applied an integrated approach that combines field geophysical measurements with laboratory hydraulic property measurements on core samples to produce accurate estimates of steady-state aquifer recharge, or, in cases where episodic recharge also occurs, the steady component of recharge. The method requires (1) measurement of the water content existing in the deep unsaturated zone at the location of a core sample retrieved for lab measurements, and (2) measurement of the core sample's unsaturated hydraulic conductivity over a range of water content that includes the value measured in situ. Both types of measurements must be done with high accuracy. Darcy's law applied with the measured unsaturated hydraulic conductivity and gravitational driving force provides recharge estimates. Aquifer recharge was estimated using Darcian and geophysical methods at a deep porous rock (calcarenite) experimental site in Canosa, southern Italy. Electrical Resistivity Tomography (ERT) and Vertical Electrical Sounding (VES) profiles were collected from the land surface to the water table to provide data for Darcian recharge estimation. Volumetric water content was estimated from resistivity profiles using a laboratory-derived calibration function based on Archie's law for rock samples from the experimental site, where the electrical conductivity of the rock was related to the porosity and water saturation. Multiple-depth core samples were evaluated using the Quasi-Steady Centrifuge (QSC) method to obtain hydraulic conductivity (K), matric potential (ψ), and water content (θ) estimates within this profile. Laboratory-determined unsaturated hydraulic conductivity ranged from 3.90 × 10^-9 to 1.02 × 10^-5 m
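The two steps of the approach, inverting Archie's law for water content and applying Darcy's law under a unit gravity gradient, chain together as below. All constants and table values are hypothetical stand-ins for the site's laboratory calibration, chosen only to fall in the reported conductivity range:

```python
import math

def water_content_from_resistivity(rho, rho_w, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law rho = a * rho_w * phi**-m * Sw**-n for the
    water saturation Sw, then return volumetric water content
    theta = phi * Sw.  a, m, n are core-sample calibration constants."""
    sw = (a * rho_w / (phi ** m * rho)) ** (1.0 / n)
    return phi * min(sw, 1.0)

def steady_recharge(theta, theta_tab, k_tab):
    """Darcy's law under a unit (gravity) gradient: recharge = K(theta),
    with K(theta) log-linearly interpolated in the lab-measured table."""
    for i in range(len(theta_tab) - 1):
        t0, t1 = theta_tab[i], theta_tab[i + 1]
        if t0 <= theta <= t1:
            f = (theta - t0) / (t1 - t0)
            logk = (1 - f) * math.log10(k_tab[i]) + f * math.log10(k_tab[i + 1])
            return 10 ** logk
    raise ValueError("theta outside the measured range")

# hypothetical field resistivity and lab K(theta) table (units m/s)
theta = water_content_from_resistivity(rho=120.0, rho_w=2.0, phi=0.35)
q = steady_recharge(theta, [0.10, 0.20, 0.30], [3.9e-9, 5.0e-7, 1.0e-5])
```

Log-linear interpolation is used because unsaturated K varies over several orders of magnitude across a modest range of water content.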
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
Sensitivity of Process Design due to Uncertainties in Property Estimates
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent
2012-01-01
The objective of this paper is to present a systematic methodology for performing analysis of the sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) a list of properties with critical importance on design; b) acceptable levels of … in chemical processes. Among others, the vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with ±0.25% accuracy to satisfy acceptable safety levels in design.
Lubis, A. S.; Muis, Z. A.; Pasaribu, M. I.
2017-03-01
The strength and durability of pavement construction are highly dependent on the properties and bearing capacity of the subgrade. This motivates selecting a method of estimating soil density that is properly implemented, fast, and economical. This study aims to estimate the compaction parameters, namely the maximum dry unit weight (γd max) and optimum moisture content (wopt), from the index properties of soils stabilized with Portland Cement. Tests were conducted in the soil mechanics laboratory to determine the index properties (fines and liquid limit) and to perform the Standard Compaction Test. Soil samples with a Plasticity Index (PI) between 0-15% were mixed with Portland Cement (PC) at 2%, 4%, 6%, 8% and 10%, with 10 samples each. The results showed that the maximum dry unit weight (γd max) and wopt have a significant relationship with percent fines, liquid limit and the percentage of cement. The equation for the estimated maximum dry unit weight is (γd max) = 1.782 - 0.011*LL + 0.000*F + 0.006*PS with R2 = 0.915, and the estimated optimum moisture content is (wopt) = 3.441 + 0.594*LL + 0.025*F + 0.024*PS with R2 = 0.726.
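The two regression equations can be applied directly; a minimal sketch, with variable names taken from the abstract (LL = liquid limit in %, F = percent fines, PS = percent cement):

```python
def estimate_compaction(LL, F, PS):
    """Estimate compaction parameters of cement-stabilized soil from index
    properties, using the regression equations quoted in the abstract.
    LL: liquid limit (%), F: percent fines, PS: percent cement."""
    gamma_d_max = 1.782 - 0.011 * LL + 0.000 * F + 0.006 * PS  # R2 = 0.915
    w_opt = 3.441 + 0.594 * LL + 0.025 * F + 0.024 * PS        # R2 = 0.726
    return gamma_d_max, w_opt

# Illustrative input: LL = 30%, F = 60%, PS = 6% cement
gd, w = estimate_compaction(30.0, 60.0, 6.0)
```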
A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass
Directory of Open Access Journals (Sweden)
Da Huang
2014-01-01
Full Text Available The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, Copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal Copula functions between rock mass quality Q and f, and between Q and c, are established for marbles based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the Copula functions are derived from in situ tests on marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. In an extended application to another 9 sets of in situ tests, the estimated values of f and c from the Copula-based method achieve better accuracy than those from the Hoek-Brown criterion. Therefore, the proposed Copula-based method is an effective tool for estimating rock strength parameters.
A Method of Nuclear Software Reliability Estimation
International Nuclear Information System (INIS)
Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol
2011-01-01
A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects from a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method can accurately estimate the remaining number of defects in on-demand software that directly affects safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining it is proposed.
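The abstract does not name the specific SRGM used; as an illustration, a widely used NHPP model is the Goel-Okumoto model, whose mean value function μ(t) = a(1 − e^(−bt)) gives the expected number of failures observed by test time t, so the expected remaining defects are a − μ(t). Parameter values below are invented:

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative
    number of failures detected by test time t."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(t, a, b):
    """Expected defects still latent in the software at time t."""
    return a - go_mean_value(t, a, b)

# Illustrative parameters (invented, not from the paper):
# a = expected total defects, b = per-defect detection rate.
a, b = 50.0, 0.05
left = remaining_defects(40.0, a, b)  # expected defects remaining
```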
Thermodynamic estimation: Ionic materials
International Nuclear Information System (INIS)
Glasser, Leslie
2013-01-01
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
Method-related estimates of sperm vitality.
Cooper, Trevor G; Hellenkemper, Barbara
2009-01-01
Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.
An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements
Kang, D.
2015-12-01
In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, a temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken at multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield the minimum of a cost function defined as a weighted summation of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
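As a simplified illustration of fitting a similarity profile to multi-level data: under neutral stability MOST reduces to the logarithmic wind profile u(z) = (u*/κ) ln(z/z0), which can be fitted to any number of levels by least squares. This is a sketch only; the paper's cost function additionally weights levels by sample variance and altitude:

```python
import math

KAPPA = 0.4  # von Karman constant

def fit_log_profile(zs, us):
    """Least-squares fit of the neutral log law u(z) = (u*/kappa) ln(z/z0)
    to wind speeds at several heights: linear regression of u on ln(z)."""
    n = len(zs)
    x = [math.log(z) for z in zs]
    xbar, ubar = sum(x) / n, sum(us) / n
    slope = sum((xi - xbar) * (ui - ubar) for xi, ui in zip(x, us)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ubar - slope * xbar
    u_star = slope * KAPPA                # friction velocity
    z0 = math.exp(-intercept / slope)     # roughness length
    return u_star, z0

# Synthetic four-level profile with u* = 0.3 m/s and z0 = 0.01 m
zs = [2.0, 5.0, 10.0, 20.0]
us = [(0.3 / KAPPA) * math.log(z / 0.01) for z in zs]
u_star, z0 = fit_log_profile(zs, us)
```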
A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding
Directory of Open Access Journals (Sweden)
Xue Mengfan
2016-06-01
Full Text Available X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, thus resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are derived. Based on these, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
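The Fourier-based idea can be sketched as follows: for a folded profile that is a cyclic shift of a template, the shift appears as a phase ramp in the Fourier domain, and the fundamental harmonic alone already gives a fast phase estimate. This is a toy version of the estimator, not the paper's full likelihood procedure:

```python
import cmath
import math

def harmonic(profile, k=1):
    """k-th discrete Fourier coefficient of a sampled periodic profile."""
    n = len(profile)
    return sum(p * cmath.exp(-2j * math.pi * k * i / n)
               for i, p in enumerate(profile))

def estimate_phase(template, observed):
    """Phase lag (in cycles) of `observed` relative to `template`, read
    off the fundamental harmonic of the two folded profiles."""
    dphi = cmath.phase(harmonic(template) * harmonic(observed).conjugate())
    return (dphi / (2.0 * math.pi)) % 1.0

# Synthetic Gaussian pulse profile, circularly shifted by a quarter cycle
n = 256
template = [math.exp(-((i / n - 0.5) ** 2) / (2 * 0.05 ** 2)) for i in range(n)]
observed = template[-n // 4:] + template[:-n // 4]  # shift by n/4 samples
shift = estimate_phase(template, observed)
```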
International Nuclear Information System (INIS)
Takemura, T.; Taniguchi, T.
2004-01-01
The purpose of this paper is to offer a new method for detecting stress in wood due to moisture, along the lines of a theory reported previously. According to the theory, the stress in wood can be estimated from the moisture content of the wood and the power voltage of a microwave moisture meter (i.e., the attenuation of the projected microwave). This suggests the possibility of utilizing microwaves in the field of stress detection. To develop this idea, the stress formulas were first modified into the form of a univariate function of power voltage, and the application of the formulas to detection was tested. Finally, these results were applied to data for sugi (Cryptomeria japonica) lumber from the previous experiment. The estimated strains showed fairly good agreement with those observed. It can be concluded from this study that the proposed method may be useful for detecting stress in wood due to moisture.
The MIRD method of estimating absorbed dose
International Nuclear Information System (INIS)
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
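The core of the MIRD schema is a sum over source organs of the cumulated activity times the corresponding S value; a minimal sketch with invented numbers:

```python
def absorbed_dose(cumulated_activity, s_factors):
    """MIRD schema: mean absorbed dose to a target organ is the sum over
    source organs h of the cumulated activity A~_h times the S value
    S(target <- h).  Units must be consistent, e.g. MBq*s and mGy/(MBq*s)."""
    return sum(cumulated_activity[h] * s_factors[h] for h in cumulated_activity)

# Hypothetical two-source example (all values illustrative only)
A_tilde = {"liver": 1.2e5, "kidneys": 3.0e4}  # cumulated activity, MBq*s
S = {"liver": 2.0e-5, "kidneys": 5.0e-6}      # mGy per MBq*s, to the target
dose = absorbed_dose(A_tilde, S)              # mGy
```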
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
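A commonly quoted form of Taylor's Rule relates optimal mine life to the fourth root of ore tonnage; the 0.2 coefficient below is the textbook form with tonnage in metric tons, not necessarily the re-estimated coefficients from the study:

```python
def taylors_rule_life(ore_tonnes):
    """Commonly quoted form of Taylor's Rule: optimal mine life in years
    scales with the fourth root of the ore tonnage (in metric tons)."""
    return 0.2 * ore_tonnes ** 0.25

def implied_daily_rate(ore_tonnes, operating_days=350):
    """Operating rate (t/day) implied by mining the tonnage over that life."""
    life = taylors_rule_life(ore_tonnes)
    return ore_tonnes / (life * operating_days)

life = taylors_rule_life(1.0e8)   # 100 Mt deposit -> about 20 years
rate = implied_daily_rate(1.0e8)  # implied operating rate, t/day
```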
Directory of Open Access Journals (Sweden)
A. J. Komkoua Mbienda
2013-01-01
The Lee and Kesler (LK) and Ambrose-Walton (AW) methods for estimating vapor pressures are tested against experimental data for a set of volatile organic compounds (VOC). The vapor pressure, required to determine the gas-particle partitioning of such organic compounds, is used as a parameter for simulating the dynamics of atmospheric aerosols. Here, we use the structure-property relationships of VOC to estimate vapor pressures. The accuracy of each of the aforementioned methods is also assessed for each class of compounds (hydrocarbons; monofunctionalized; difunctionalized; and tri- and more functionalized volatile organic species). It is found that the best method for each VOC depends on its functionality.
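For reference, the Lee-Kesler corresponding-states correlation estimates the reduced vapor pressure from the reduced temperature and the acentric factor. A sketch using the standard published coefficients; the critical constants for benzene in the example are approximate:

```python
import math

def lee_kesler_vapor_pressure(T, Tc, Pc, omega):
    """Lee-Kesler corresponding-states estimate of vapor pressure.
    T, Tc in K; Pc in bar; omega = acentric factor.  Returns P in bar."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr ** 6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr ** 6
    return Pc * math.exp(f0 + omega * f1)

# Benzene at its normal boiling point (Tc ~ 562.05 K, Pc ~ 48.95 bar,
# omega ~ 0.21): the estimate should be close to 1 atm (1.013 bar).
P = lee_kesler_vapor_pressure(353.25, 562.05, 48.95, 0.21)
```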
Estimation of sediment properties during benthic impact experiments
Digital Repository Service at National Institute of Oceanography (India)
Yamazaki, T.; Sharma, R
Sediment properties, such as water content and density, have been used to estimate the dry and wet weights, as well as the volume of sediment recovered and discharged, during benthic impact experiments conducted in the Pacific and Indian Oceans...
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Fonseca, E. S. R.; de Jesus, M. E. P.
2007-07-01
The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task, since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency-domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can improve considerably the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1 based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to
International Nuclear Information System (INIS)
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-01-01
Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties
M. Dean Knighton
1978-01-01
Soil properties were related to infiltration rates as measured by single-ring, steady-head infiltometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement with measured rates at another location.
DEFF Research Database (Denmark)
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python...... as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8)....
Empirical estimates of CCN from aerosol optical properties at four remote sites
Directory of Open Access Journals (Sweden)
A. Jefferson
2010-07-01
Full Text Available This study presents an empirical method to estimate the CCN concentration as a function of percent supersaturation. The aerosol optical properties, backscatter fraction and single scatter albedo, function as proxies for the aerosol size and composition in a power-law relationship to CCN. This method is tested at four sites with aged aerosol: SGP (Oklahoma, USA), FKB (Black Forest, Germany), HFE (Hefei, China) and GRW (Graciosa, Azores). Each site represents a different aerosol type and thus demonstrates the method's robustness and limitations. Good agreement was found between the calculated and measured CCN, with slopes between 0.81 and 1.03 and correlation coefficients (r²) between 0.59 and 0.67. The fit quality declined at low CCN concentrations.
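The generic form behind such parameterizations is a power law fitted in log-log space; a minimal sketch with a synthetic activation spectrum (the paper's actual predictors are backscatter fraction and single-scatter albedo, not supersaturation alone):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b in log-log space, the generic
    form behind CCN activation spectra N_CCN(S) = a * S**b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic activation spectrum: N = 500 * S^0.6 (S in % supersaturation)
ss = [0.1, 0.2, 0.4, 0.8, 1.0]
ns = [500.0 * s ** 0.6 for s in ss]
a, b = fit_power_law(ss, ns)
```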
A contact method of determination of thermophysical properties of rocks from core samples
International Nuclear Information System (INIS)
Gavril'ev, R.I.
1995-01-01
The zone of action of thermal disturbances around a circular heat source on the surface of a semi-infinite body is estimated with the aim of using contact methods for determining the thermophysical properties of materials from core samples.
Teletactile System Based on Mechanical Properties Estimation
Directory of Open Access Journals (Sweden)
Mauro M. Sette
2011-01-01
Full Text Available Tactile feedback is a major missing feature in minimally invasive procedures; it is an essential means of diagnosis and orientation during surgical procedures. Previous works have presented a remote palpation feedback system based on the coupling between a pressure sensor and a general haptic interface. Here a new approach is presented based on the direct estimation of the tissue mechanical properties and finally their presentation to the operator by means of a haptic interface. The approach presents different technical difficulties and some solutions are proposed: the implementation of a fast Young’s modulus estimation algorithm, the implementation of a real time finite element model, and finally the implementation of a stiffness estimation approach in order to guarantee the system’s stability. The work is concluded with an experimental evaluation of the whole system.
Effect of Uncertainties in Physical Property Estimates on Process Design - Sensitivity Analysis
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sin, Gürkan
for performing sensitivity of process design subject to uncertainties in the property estimates. To this end, first uncertainty analysis of the property models of pure components and their mixtures was performed in order to obtain the uncertainties in the estimated property values. As a next step, sensitivity......Chemical process design calculations require accurate and reliable physical and thermodynamic property data and property models of pure components and their mixtures in order to obtain reliable design parameters which help to achieve desired specifications. The uncertainties in the property values...... can arise from the experiments themselves or from the property models employed. It is important to consider the effect of these uncertainties on the process design in order to assess the quality and reliability of the final design. The main objective of this work is to develop a systematic methodology......
System and method for traffic signal timing estimation
Dumazert, Julien; Claudel, Christian G.
2015-01-01
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
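A toy version of the scoring idea: score each candidate cycle length by how tightly the observed green-start transition times cluster modulo that cycle, then pick the maximizer. The circular mean resultant length used as the score here is an assumption, not the patent's exact scoring function:

```python
import cmath
import math

def cycle_score(times, c):
    """Score a candidate cycle length c by how tightly the observed
    green-start times cluster modulo c (circular mean resultant length)."""
    z = sum(cmath.exp(2j * math.pi * (t % c) / c) for t in times)
    return abs(z) / len(times)

def estimate_cycle(times, candidates):
    """Pick the candidate cycle length that maximizes the score."""
    return max(candidates, key=lambda c: cycle_score(times, c))

# Synthetic green starts every 90 s with small timing jitter
jitter = [0, 1, -1, 2, 0, 1, -2, 0, 1, 0, -1, 2]
times = [90 * k + j for k, j in zip(range(12), jitter)]
best = estimate_cycle(times, candidates=[60, 75, 90, 120])
```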
A Bayesian Markov geostatistical model for estimation of hydrogeological properties
International Nuclear Information System (INIS)
Rosen, L.; Gustafson, G.
1996-01-01
A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden
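The Bayesian updating step at the heart of such a methodology is ordinary Bayes' rule applied to discrete classes; a minimal sketch with invented prior and likelihood values:

```python
def bayes_update(prior, likelihood):
    """One Bayesian updating step, as used in BayMar-style estimation:
    posterior(state) is proportional to prior(state) * P(obs | state)."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical example: prior suitability classes and the likelihood of
# observing low hydraulic conductivity under each class (values invented).
prior = {"suitable": 0.5, "unsuitable": 0.5}
lik = {"suitable": 0.8, "unsuitable": 0.2}
posterior = bayes_update(prior, lik)
```

Each new observation simply feeds the current posterior back in as the next prior, which is how prior estimates are updated as site investigations progress.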
International Nuclear Information System (INIS)
Cheng, Wen-Long; Huang, Yong-Hua; Liu, Na; Ma, Ran
2012-01-01
Thermal conductivity is a key parameter for evaluating wellbore heat losses, which play an important role in determining the efficiency of steam injection processes. In this study, an unsteady formation heat-transfer model was established and a cost-effective in situ method using a stochastic approximation method based on well-log temperature data was presented. The proposed method was able to estimate the thermal conductivity and the volumetric heat capacity of the geological formation simultaneously under in situ conditions. The feasibility of the present method was assessed by a sample test, the results of which showed that the thermal conductivity and the volumetric heat capacity could be obtained with relative errors of −0.21% and −0.32%, respectively. In addition, three field tests were conducted based on the easily obtainable well-log temperature data from the steam injection wells. It was found that the relative errors of thermal conductivity for the three field tests were within ±0.6%, demonstrating the excellent performance of the proposed method for calculating thermal conductivity. The relative errors of volumetric heat capacity ranged from −6.1% to −14.2% for the three field tests. Sensitivity analysis indicated that this was due to the low correlation between the volumetric heat capacity and the wellbore temperature, which was used to generate the judgment criterion. -- Highlights: ► A cost-effective in situ method for estimating thermal properties of formation was presented. ► Thermal conductivity and volumetric heat capacity can be estimated simultaneously by the proposed method. ► The relative error of thermal conductivity estimated was within ±0.6%. ► Sensitivity analysis was conducted to study the estimated results of thermal properties.
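The abstract does not give the exact algorithm; a generic stochastic approximation scheme of the Robbins-Monro type, which drives a noisy model-minus-data residual to zero, can be sketched as follows (the target value and noise level are invented):

```python
import random

def robbins_monro(g_noisy, x0, n_iter=2000, a0=1.0):
    """Robbins-Monro stochastic approximation: find x with E[g(x)] = 0
    using only noisy evaluations of g, the generic scheme behind fitting
    a formation property to well-log temperatures (a sketch, not the
    paper's exact algorithm).  Step sizes a0/k satisfy the usual
    decreasing-gain conditions."""
    x = x0
    for k in range(1, n_iter + 1):
        x -= (a0 / k) * g_noisy(x)
    return x

random.seed(1)
# Noisy residual whose mean is zero at x = 2.5, standing in for the
# model-minus-log temperature mismatch as a function of conductivity.
g = lambda x: (x - 2.5) + random.gauss(0.0, 0.1)
est = robbins_monro(g, x0=1.0)
```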
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie
2017-09-01
Small molecules interact with their protein target in surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket from the protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results, which can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlight the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Brookins, D.G.
1981-12-01
In this module geological and geochemical data pertinent to locating, mining, and milling of uranium are examined. Chapters are devoted to: uranium source characteristics; uranium ore exploration methods; uranium reserve estimation for sandstone deposits; mining; milling; conversion processes for uranium; and properties of uranium, thorium, plutonium and their oxides and carbides
A method for measuring the inertia properties of rigid bodies
Gobbi, M.; Mastinu, G.; Previati, G.
2011-01-01
A method for the measurement of the inertia properties of rigid bodies is presented. Given a rigid body and its mass, the method allows the centre of gravity location and the inertia tensor to be measured (identified) during a single test. The proposed technique is based on the analysis of the free motion of a multi-cable pendulum to which the body under consideration is connected. The motion of the pendulum and the forces acting on the system are recorded, and the inertia properties are identified by means of a mathematical procedure based on least-squares estimation. After the body is positioned on the test rig, the full identification procedure takes less than 10 min. The natural frequencies of the pendulum and the accelerations involved are quite low, making this method suitable for many practical applications. In this paper, the proposed method is described and two test rigs are presented: the first developed for bodies up to 3500 kg and the second for bodies up to 400 kg. A validation of the measurement method is performed with satisfactory results. The test rig holds a third-party quality certificate according to the ISO 9001 standard and could be scaled up to measure the inertia properties of huge bodies, such as trucks, airplanes or even ships.
Reverse survival method of fertility estimation: An evaluation
Directory of Open Access Journals (Sweden)
Thomas Spoorenberg
2014-07-01
Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth histories to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the differences implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method to the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data
Comparison of Thermal Properties Measured by Different Methods
International Nuclear Information System (INIS)
Sundberg, Jan; Kukkonen, Ilmo; Haelldahl, Lars
2003-04-01
A strategy for a thermal site descriptive model of bedrock is under development at SKB. In the model, different kinds of uncertainties exist. Some of these uncertainties are related to the potential errors in the methods used for determining thermal properties of rock. In two earlier investigations, thermal properties of rock samples were analysed according to the TPS (transient plane source) method. Thermal conductivity and thermal diffusivity were determined using the TPS method. For comparison, the same samples have been measured at the Geological Survey of Finland (GSF) using different laboratory methods. In this later investigation, the thermal conductivity was determined using the divided-bar method and the specific heat capacity using a calorimetric method. The mean differences between the results of the different methods are relatively low, but the results for individual samples show large variations. The thermal conductivity measured by the divided-bar method gives slightly higher values for most samples, on average about 3%, than the TPS method. The specific heat capacity measured by the calorimetric method gives lower values, on average about 2%, than the TPS method. Consequently, the thermal diffusivity calculated from thermal conductivity and specific heat capacity gives higher values, on average about 6%, than the TPS method. Reasons for the differences are estimated to stem mainly from differences between the samples, errors in the temperature dependence of specific heat, and the transformation from volumetric to specific heat. The TPS measurements are performed using two pieces (sub-samples) of rock. Only one of these two sub-samples was measured using the divided-bar method and the calorimetric method. Further, sample preparation involved changes in the size of some of the samples. The mean differences between the results of the different methods are within the margins of error reported by the measuring laboratories. However, systematic errors in
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.
2017-07-25
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.
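The central step described above — projecting the PDE onto modulating functions to obtain a linear algebraic system in the unknown coefficients — ends in an ordinary least-squares solve. A minimal sketch of that final step, using a toy two-unknown system (the matrix entries are invented, not the actual fifth-order KdV operator):

```python
# Hypothetical sketch: recover unknown coefficients c from an
# overdetermined linear system A c = b via the normal equations.

def solve_2x2(m, v):
    """Solve a 2x2 linear system m c = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def least_squares(a, b):
    """Return c minimizing ||a c - b|| for a tall matrix with 2 columns."""
    # Normal equations: (A^T A) c = A^T b.
    ata = [[sum(row[i] * row[j] for row in a) for j in range(2)]
           for i in range(2)]
    atb = [sum(row[i] * bi for row, bi in zip(a, b)) for i in range(2)]
    return solve_2x2(ata, atb)

# Toy data generated from known coefficients (2.0, -0.5).
a = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [0.5, 1.5]]
b = [r[0] * 2.0 + r[1] * (-0.5) for r in a]
c = least_squares(a, b)
```

With noise-free data the normal equations recover the generating coefficients exactly; with noisy measurements the same solve returns the least-squares fit.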
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Lin, M.; Brechtel, C.E.; Hardy, M.P.; Bauer, S.J.
1992-01-01
This paper presents a method of estimating the rock mass properties for the welded and nonwelded tuffs based on currently available information on intact rock and joint characteristics at the Yucca Mountain site. Variability of the expected ground conditions at the potential repository horizon (the TSw2 thermomechanical unit) and in the Calico Hills nonwelded tuffs is accommodated by defining five rock mass quality categories in each unit based upon assumed and observed distributions of the data
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.
Directory of Open Access Journals (Sweden)
Vojislav V Mitić
2011-05-01
Full Text Available Methods of stereological study are of great importance for structural research of electronic ceramic materials, including BaTiO3 ceramics. The broad application of barium-titanate-based ceramics in advanced electronics demands constant research of their structure: through the structure-properties correlation, fundamental in the materials properties prognosis triad (technology-structure-properties), this leads to further prognosis and properties design of these ceramics. Microstructure properties of BaTiO3 ceramic material, expressed in grain boundary contacts, are of basic importance for the electric properties of this material, particularly the capacity. In this paper, a significant step towards establishing control over the capacitive properties of BaTiO3 ceramics is made by estimating the number of grain contact surfaces. To define an efficient stereology method for estimating the number of BaTiO3 ceramic grain contact surfaces, we started from a mathematical model of mutual grain distribution in a prescribed volume of a BaTiO3 ceramic sample. Since the real microstructure morphology of BaTiO3 ceramics is somewhat disordered, spherical grains are approximated, using computer-modelling methods, by polyhedra with a great number of small convex polygons. By dividing the volume of the BaTiO3 ceramic sample with a definite number of parallel planes, at a given spacing, a certain number of grain contact surfaces are identified in each intersection plane. From quantitative estimation of 2D stereological parameters, the modelled 3D internal microstructure is obtained. Experiments were made using the scanning electron microscopy (SEM) method with ceramic samples prepared under pressing pressures up to 150 MPa and sintering temperatures up to 1370°C, while the obtained microphotographs were used as a basis for confirming the validity of the presented stereology method. This paper, by applying
Estimating functions for inhomogeneous Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
Estimation methods are reviewed for inhomogeneous Cox processes with tractable first- and second-order properties. We illustrate the various suggestions by means of data examples.
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
… more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models, and to quantify the uncertainties in the estimated property values from a process design point of view. This includes: (i) parameter estimation using … The comparison of model prediction uncertainties with the reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column, …) … the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce the most critical uncertainties.
A Fast Soft Bit Error Rate Estimation Method
Directory of Open Access Journals (Sweden)
Ait-Idir Tarik
2010-01-01
Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER of a digital communications system instead of using the famous Monte Carlo (MC simulation. This method was based on the estimation of the probability density function (pdf of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest to use a Gaussian Mixture (GM model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
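The closing step above — an analytical BER read directly off the estimated mixture parameters — can be sketched as follows. This is a minimal illustration with a single Gaussian component per transmitted bit and invented parameters (mean 1.0, noise std 0.5), not the paper's multiuser CDMA setup:

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_from_mixture(components):
    """Analytical BER for soft samples of the '+1' bit modeled as a
    Gaussian mixture: weighted probability mass falling below zero.
    Each component is a (weight, mean, std) triple."""
    return sum(w * q_function(mu / sigma) for w, mu, sigma in components)

# Single-component "mixture" with illustrative parameters.
analytic = ber_from_mixture([(1.0, 1.0, 0.5)])

# Monte Carlo counterpart for comparison: count sign errors directly.
random.seed(0)
n = 200_000
mc = sum(1 for _ in range(n) if random.gauss(1.0, 0.5) < 0) / n
```

The point of the analytical route is visible at low BER: the closed form needs no samples at all, while the Monte Carlo count needs ever more samples as the error rate drops.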
Coalescent methods for estimating phylogenetic trees.
Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V
2009-10-01
We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.
Statistical error estimation of the Feynman-α method using the bootstrap method
International Nuclear Information System (INIS)
Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho
2016-01-01
Applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
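The bootstrap idea described above — resampling a single measured sequence with replacement and taking the spread of the re-computed statistic as the statistical error — can be sketched generically. The statistic here is the sample mean on synthetic counts, not the actual Feynman-α Y value:

```python
import random
import statistics

def bootstrap_std_error(data, statistic, n_resamples=1000, seed=1):
    """Estimate the standard error of statistic(data) by resampling
    the single measured sequence with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = [
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    ]
    return statistics.stdev(replicates)

# Synthetic stand-in for a single noise measurement: 400 noisy readings.
rng = random.Random(0)
data = [rng.gauss(10.0, 2.0) for _ in range(400)]

se = bootstrap_std_error(data, statistics.mean)
# For the mean, theory gives sigma / sqrt(n) = 2 / 20 = 0.1,
# so the bootstrap estimate should land close to that.
```

The same function works unchanged for any statistic of the sequence, which is exactly what makes the approach attractive when repeating the physical measurement is expensive.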
The Influence of Microarc Oxidation Method Modes on the Properties of Coatings
Directory of Open Access Journals (Sweden)
N.Y. Dudareva
2014-07-01
Full Text Available Experimental studies of the properties of the hardened surface layer developed by the microarc oxidation (MAO) method on the surface of Al-Si ingots of AK12D alloy are presented here. The effect of the concentration of the electrolyte components on the properties of the MAO coating, such as microhardness, thickness and porosity, has been studied. The corresponding regression equations to estimate the influence of the process parameters on the quality of the developed MAO layer have been set up.
Shimizu, Chihiro
2014-01-01
How exactly should one estimate property investment returns? Investors in property aim to maximize capital gains from price increases and income generated by the property. How are the returns on investment in property determined based on its characteristics, and what kind of market characteristics does it have? Focusing on the Tokyo commercial property market and residential property market, the purpose of this paper was to break down and measure the micro-structure of property investment ret...
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
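The classical Hill estimator, of which the proposed method is a bootstrap version, can be sketched on synthetic Pareto data (the sample size and the choice of k = 500 upper order statistics are illustrative assumptions):

```python
import math
import random

def hill_estimator(sample, k):
    """Classical Hill estimator of the tail index, computed from the
    k largest order statistics of a positive-valued sample."""
    xs = sorted(sample, reverse=True)
    # Mean log-excess over the (k+1)-th largest observation.
    h = sum(math.log(xs[i]) - math.log(xs[k]) for i in range(k)) / k
    return 1.0 / h  # estimated tail index alpha

# Pareto data with tail index 2: X = U**(-1/alpha) for U ~ Uniform(0,1).
random.seed(42)
alpha_true = 2.0
sample = [random.random() ** (-1.0 / alpha_true) for _ in range(20_000)]

alpha_hat = hill_estimator(sample, k=500)
```

A bootstrap version along the lines of the abstract would resample the data, recompute `alpha_hat` on each resample, and use the spread of the replicates to judge the estimator's variability and the choice of k.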
Using physical properties of molten glass to estimate glass composition
International Nuclear Information System (INIS)
Choi, Kwan Sik; Yang, Kyoung Hwa; Park, Jong Kil
1997-01-01
A vitrification process is under development at KEPRI for the treatment of low- and medium-level radioactive waste. Although the project is for developing and building a Vitrification Pilot Plant in Korea, one of KEPRI's concerns is the quality control of the vitrified glass. This paper discusses a methodology for the estimation of glass composition by on-line measurement of molten glass properties, which could be applied to the plant for real-time quality control of the glass product. By remotely measuring viscosity and density of the molten glass, glass characteristics such as composition can be estimated and eventually controlled. For this purpose, using a database of glass composition vs. physical properties in the isothermal three-component system SiO2-Na2O-B2O3, a software TERNARY has been developed which determines the glass composition by using two known physical properties (e.g. density and viscosity)
Estimation for Retention Factor of Isoflavones in Physico-Chemical Properties
International Nuclear Information System (INIS)
Lee, Seung Ki; Row, Kyung Ho
2003-01-01
The estimation of retention factors by correlation equations with physico-chemical properties may be helpful in chromatographic work. The physico-chemical properties were water solubility (S), hydrophobicity (P), total energy (Et), connectivity index 1 (1χ), hydrophilic-lipophilic balance (x) and hydrophilic surface area (h) of isoflavones. The retention factors were experimentally measured by RP-HPLC. In particular, the empirical relations for water solubility and hydrophobicity were expressed in linear form. The equation between retention factors and the various physico-chemical properties of isoflavones was suggested as k = a0 + a1 log S + a2 log P + a3(Et) + a4(1χ) + a5(x) + a6(h), and the correlation coefficients estimated were relatively higher than 0.95. The empirical equations might be successfully used for the prediction of various chromatographic characteristics of substances with a similar chemical structure
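Fitting a correlation equation of this form is an ordinary least-squares problem. A minimal sketch for a single descriptor (retention factor k against log S, with invented data) illustrates both the fitted coefficients and the correlation coefficient the abstract reports:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = b0 + b1 * x, closed form,
    plus the Pearson correlation coefficient r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    r = sxy / math.sqrt(sxx * syy)
    return b0, b1, r

# Hypothetical data: retention factor falling roughly linearly with log S.
log_s = [-4.0, -3.5, -3.0, -2.5, -2.0]
k_obs = [8.1, 7.0, 6.2, 4.9, 4.1]

b0, b1, r = fit_line(log_s, k_obs)
```

The multi-descriptor equation in the abstract is the same computation with several regressors, solved through the normal equations instead of the closed single-variable form.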
Method And Apparatus For Two Dimensional Surface Property Analysis Based On Boundary Measurement
Richardson, John G.
2005-11-15
An apparatus and method for determining properties of a conductive film is disclosed. A plurality of probe locations selected around a periphery of the conductive film define a plurality of measurement lines between each probe location and all other probe locations. Electrical resistance may be measured along each of the measurement lines. A lumped parameter model may be developed based on the measured values of electrical resistance. The lumped parameter model may be used to estimate resistivity at one or more selected locations encompassed by the plurality of probe locations. The resistivity may be extrapolated to other physical properties if the conductive film includes a correlation between resistivity and the other physical properties. A profile of the conductive film may be developed by determining resistivity at a plurality of locations. The conductive film may be applied to a structure such that resistivity may be estimated and profiled for the structure's surface.
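A drastically simplified version of the estimation idea — fitting one resistivity parameter to resistances measured along the lines between all probe pairs — might look like the sketch below. The resistance-proportional-to-line-length model is an illustrative assumption, not the patent's lumped parameter model:

```python
import math

def line_lengths(probes):
    """Lengths of the measurement lines between all probe pairs."""
    return [math.dist(p, q)
            for i, p in enumerate(probes) for q in probes[i + 1:]]

def fit_resistivity(lengths, resistances):
    """One-parameter least-squares fit R_i = rho_per_len * L_i,
    a drastic simplification of a full lumped parameter model."""
    return (sum(r * l for r, l in zip(resistances, lengths))
            / sum(l * l for l in lengths))

# Four probes on the periphery of a unit-square film; a hypothetical
# per-length resistivity of 3.0 generates noise-free readings.
probes = [(0, 0), (1, 0), (1, 1), (0, 1)]
lengths = line_lengths(probes)           # 6 measurement lines for 4 probes
measured = [3.0 * l for l in lengths]

rho = fit_resistivity(lengths, measured)
```

With n probes there are n(n-1)/2 measurement lines, so the system quickly becomes overdetermined enough to resolve spatial variation once the single parameter is replaced by a lumped network.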
DEFF Research Database (Denmark)
Huang, L.; Ernstoff, Alexi; Xu, H.
2017-01-01
Organic chemicals encapsulated in beverage and food packaging can migrate into the food and lead to human exposure via ingestion. The packaging-food (Kpf) partition coefficient is a key parameter for estimating chemical migration from packaging materials. Previous studies have simply set Kpf to 1 or 1000, or provided separate linear correlations for several discrete values of ethanol equivalencies of food simulants (EtOH-eq). The aim of the present study is to develop a single quantitative property-property relationship (QPPR) valid for different chemical-packaging combinations and for water … because only two packaging types are included. This preliminary QPPR demonstrates that the Kpf for various chemical-packaging-food combinations can be estimated by a single linear correlation. Based on more than 1000 collected Kpf values in 15 materials, we will present extensive results for other packaging types …
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting the early stages of crack initiation. This model requires more experimental data for calibration than the simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations is the easiest for practical use, their applicability having already been verified for a broad range of materials.
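One widely cited simple approximation for steels is the uniform material law, which fills the Coffin-Manson/Basquin parameters from the ultimate strength and elastic modulus alone. A hedged sketch follows, with the coefficients as commonly quoted in the literature and an invented example steel; verify against the original Bäumel-Seeger reference before any real use:

```python
def uml_parameters(rm, e_mod):
    """Uniform material law estimates of strain-life parameters for
    steels (coefficients as commonly quoted; an approximation only).
    rm: ultimate tensile strength [MPa], e_mod: elastic modulus [MPa]."""
    psi = 1.0 if rm / e_mod <= 3e-3 else 1.375 - 125.0 * rm / e_mod
    return {
        "sigma_f": 1.5 * rm,   # fatigue strength coefficient [MPa]
        "b": -0.087,           # fatigue strength exponent
        "eps_f": 0.59 * psi,   # fatigue ductility coefficient
        "c": -0.58,            # fatigue ductility exponent
    }

def strain_amplitude(p, e_mod, reversals):
    """Coffin-Manson/Basquin strain-life relation:
    eps_a = (sigma_f'/E)(2N)^b + eps_f'(2N)^c."""
    return (p["sigma_f"] / e_mod) * reversals ** p["b"] \
        + p["eps_f"] * reversals ** p["c"]

# Illustrative steel: Rm = 700 MPa, E = 206 000 MPa.
p = uml_parameters(700.0, 206_000.0)
eps_at_1e4 = strain_amplitude(p, 206_000.0, 2 * 10_000)  # 2N reversals
```

This is exactly the appeal noted in the abstract: two monotonic properties produce a full strain-life curve, at the cost of ignoring material-specific cyclic behavior.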
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Spatial Bias in Field-Estimated Unsaturated Hydraulic Properties
Energy Technology Data Exchange (ETDEWEB)
HOLT,ROBERT M.; WILSON,JOHN L.; GLASS JR.,ROBERT J.
2000-12-21
Hydraulic property measurements often rely on non-linear inversion models whose errors vary between samples. In non-linear physical measurement systems, bias can be directly quantified and removed using calibration standards. In hydrologic systems, field calibration is often infeasible and bias must be quantified indirectly. We use a Monte Carlo error analysis to indirectly quantify spatial bias in the saturated hydraulic conductivity, K_s, and the exponential relative permeability parameter, α, estimated using a tension infiltrometer. Two types of observation error are considered, along with one inversion-model error resulting from poor contact between the instrument and the medium. Estimates of spatial statistics, including the mean, variance, and variogram-model parameters, show significant bias across a parameter space representative of poorly to well-sorted silty sand to very coarse sand. When only observation errors are present, spatial statistics for both parameters are best estimated in materials with high hydraulic conductivity, like very coarse sand. When simple contact errors are included, the nature of the bias changes dramatically. Spatial statistics are poorly estimated, even in highly conductive materials. Conditions that permit accurate estimation of the statistics for one of the parameters prevent accurate estimation for the other; accurate regions for the two parameters do not overlap in parameter space. False cross-correlation between estimated parameters is created because estimates of K_s also depend on estimates of α, and both parameters are estimated from the same data.
Directory of Open Access Journals (Sweden)
Gener Tadeu Pereira
2013-10-01
Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and investigate its effect on the quality of soil sampling. Soil was sampled in a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m apart, at a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the conception of additional sampling schemes was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
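The SSA step described above — perturbing sampling locations and accepting moves that reduce the kriging variance, occasionally accepting worse ones to escape local minima — is simulated annealing at its core. A generic sketch on a toy objective (a quadratic bowl standing in for the kriging variance; all tuning constants are invented):

```python
import math
import random

def simulated_annealing(objective, start, step=0.5, n_iter=2000,
                        t0=1.0, cooling=0.995, seed=3):
    """Generic simulated annealing over a 2-D location, a stand-in
    for moving one sampling point to reduce a variance criterion."""
    rng = random.Random(seed)
    x = list(start)
    fx = objective(x)
    t = t0
    for _ in range(n_iter):
        # Propose a random perturbation of the current location.
        cand = [x[0] + rng.uniform(-step, step),
                x[1] + rng.uniform(-step, step)]
        fc = objective(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling  # geometric cooling schedule
    return x, fx

# Toy stand-in objective with its minimum at (2, -1).
obj = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, val = simulated_annealing(obj, start=[0.0, 0.0])
```

In the actual SSA application the objective evaluation is a full kriging-variance computation over the study area, which is what makes the method expensive per iteration.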
International Nuclear Information System (INIS)
Fraser, D.G.; Refson, K.
1992-01-01
The molecular dynamics calculations reported above give calculated P-V-T properties for H2O up to 1500 K and 100 GPa, which agree remarkably well with the available experimental data. We also observe the phase transition to a crystalline, orientationally disordered cubic ice structure. No account was taken of molecular flexibility in these calculations, nor of potential dissociation at high pressures as suggested by Hamman (1981). However, we note that the closest next-nearest-neighbour O-H approach remains significantly greater than the TIP4P fixed O-H bond length within the water molecule for all pressures studied. The equation of state proposed here should be useful for estimating the properties of H2O at up to 1500 K and 100 GPa (1 Mbar), and is much easier to use in practice than modified Redlich-Kwong equations. Extension of these methods to the study of other fluids and of fluid mixtures at high temperatures and pressures will require good potential models for the species involved, and this is likely to involve a combination of good ab initio work and semi-empirical modelling. Once developed, these models should allow robust predictions of thermodynamic properties beyond the range of the experimental data on the basis of fundamental molecular information
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to preestablished sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
Directory of Open Access Journals (Sweden)
V.N. Oparin
2015-06-01
Full Text Available A new method to test rock abrasiveness is proposed based upon the dependence of rock abrasiveness on the rocks' structural and physico-mechanical properties. The article describes a procedure for representing the properties that govern rock abrasiveness on a canonical scale by dimensionless components, and for the integrated estimation of these properties by a generalized index. The obtained results are compared with the known classifications of rock abrasiveness.
Estimated Interest Rate Rules: Do they Determine Determinacy Properties?
DEFF Research Database (Denmark)
Jensen, Henrik
2011-01-01
I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric ANOVA test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
Comparison of methods for estimating carbon in harvested wood products
International Nuclear Information System (INIS)
Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel
2009-01-01
There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method), and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and production approaches and tends to underestimate carbon accumulation with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in this order. A sensitivity analysis showed that using the "best" available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
Comparison of two perturbation methods to estimate the land surface modeling uncertainty
Su, H.; Houser, P.; Tian, Y.; Kumar, S.; Geiger, J.; Belvedere, D.
2007-12-01
In land surface modeling, it is almost impossible to simulate the land surface processes without any error, because the earth system is highly complex and the physics of the land processes has not yet been understood sufficiently. In most cases, people want to know not only the model output but also the uncertainty in the modeling, to estimate how reliable the modeling is. Ensemble perturbation is an effective way to estimate the uncertainty in land surface modeling, since land surface models are highly nonlinear, which makes the analytical approach inapplicable in this estimation. The ideal perturbation noise has a zero-mean Gaussian distribution; however, this requirement cannot be satisfied if the perturbed variables in a land surface model have physical boundaries, because part of the perturbation noise has to be removed to feed the land surface models properly. Two different perturbation methods are employed in our study to investigate their impact on quantifying land surface modeling uncertainty, based on the Land Information System (LIS) framework developed by the NASA/GSFC land team. One perturbation method is the built-in algorithm named "STATIC" in LIS version 5; the other is a new perturbation algorithm which was recently developed to minimize the overall bias in the perturbation by incorporating additional information from the whole time series for the perturbed variable. The statistical properties of the perturbation noise generated by the two different algorithms are investigated thoroughly by using a large ensemble size on a NASA supercomputer, and then the corresponding uncertainty estimates based on the two perturbation methods are compared. Their further impacts on data assimilation are also discussed. Finally, an optimal perturbation method is suggested.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
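The trapezoidal estimating-equation idea above can be sketched for the scalar ODE x'(t) = -k·x(t). This is an illustrative toy (plain least squares on noisy state observations, without the penalized-spline smoothing step the authors use), not the paper's implementation:

```python
import numpy as np

# Toy discretization-based ODE parameter estimation: the trapezoidal rule
# gives x_{i+1} - x_i = (h/2) * (f(x_i) + f(x_{i+1})), and for f(x) = -k*x
# this is linear in k, so k follows from a one-parameter regression.
rng = np.random.default_rng(0)
k_true = 0.5
t = np.linspace(0.0, 5.0, 201)
x_obs = np.exp(-k_true * t) + rng.normal(0.0, 1e-3, t.size)  # noisy states

h = t[1] - t[0]
dx = np.diff(x_obs)                      # x_{i+1} - x_i
z = 0.5 * h * (x_obs[:-1] + x_obs[1:])   # trapezoidal term

k_hat = -np.sum(dx * z) / np.sum(z * z)  # least-squares estimate of k
```

With smoothed state estimates in place of the raw observations, the same regression idea carries over to higher-order schemes such as Runge-Kutta.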
Evaluation of non cyanide methods for hemoglobin estimation
Directory of Open Access Journals (Sweden)
Vinaya B Shah
2011-01-01
Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two other non-cyanide methods: the alkaline hematin detergent (AHD-575) method using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a colorimeter which is light emitting diode (LED) based. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous and did not affect the reliability of data determination, and the cost was less than that of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe and quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers
The Application Research of Inverse Finite Element Method for Frame Deformation Estimation
Directory of Open Access Journals (Sweden)
Yong Zhao
2017-01-01
Full Text Available A frame deformation estimation algorithm is investigated for the purpose of real-time control and health monitoring of flexible lightweight aerospace structures. The inverse finite element method (iFEM) for beam deformation estimation was recently proposed by Gherlone and his collaborators. The methodology uses a least squares principle involving section strains of Timoshenko theory for stretching, torsion, bending, and transverse shearing. The proposed methodology is based on strain-displacement relations only, without invoking force equilibrium. Thus, the displacement fields can be reconstructed without knowledge of structural mode shapes, material properties, or applied loading. In this paper, the number of locations where the section strains are evaluated in the iFEM is discussed first, and the algorithm is subsequently investigated through a simply supported beam and an experimental aluminum wing-like frame model in the loading case of an end-node force. The estimation results from the iFEM are compared with reference displacements from optical measurement and computational analysis, and the accuracy of the algorithm's estimation is quantified by the root-mean-square error and the percentage difference error.
Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.
Chong, See Yenn; Todd, Michael D
2018-05-01
Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
J. C. Bergès
2010-01-01
Full Text Available This paper presents a new rainfall estimation method, EPSAT-SG, which is a framework for method design. The first implementation has been carried out to meet the requirements of the AMMA database on a West African domain. The rainfall estimation relies on two intermediate products: a rainfall probability and a rainfall potential intensity. The first one is computed from MSG/SEVIRI by a feed-forward neural network. First evaluation results show better properties than direct precipitation intensity assessment by geostationary satellite infra-red sensors. The second product can be interpreted as a conditional rainfall intensity and, in the described implementation, it is extracted from GPCP-1dd. Various implementation options are discussed, and comparison of this embedded product with 3B42 estimates demonstrates the importance of properly managing the temporal discontinuity. The resulting accumulated rainfall field can be presented as a GPCP downscaling. A validation based on ground data supplied by AGRHYMET (Niamey) indicates that the estimation error has been reduced in this process. The described method could be easily adapted to other geographical areas and operational environments.
International Nuclear Information System (INIS)
Chatterjee, S.; Madhusoodanan, K.; Rama Rao, A.
2015-01-01
In Pressurised Heavy Water Reactors (PHWRs), fuel bundles are located inside horizontal pressure tubes. Pressure tubes made of Zr 2.5 wt% Nb undergo degradation under in-service environmental conditions. Measurement of the mechanical properties of degraded pressure tubes is important for assessing their fitness for further service in the reactor. The only way to accomplish this important objective is to develop a system based on an in-situ measurement technique. Considering the importance of such measurement, an In-situ Property Measurement System (IProMS) based on the cyclic ball indentation technique has been designed and developed indigenously. The remotely operable system is capable of carrying out an indentation trial on the inside surface of the pressure tube and of estimating important mechanical properties like yield strength, ultimate tensile strength, hardness, etc. It is known that fracture toughness is one of the important life-limiting parameters of the pressure tube. Hence, five spool pieces of Zr 2.5 wt% Nb pressure tube of different mechanical properties have been used for estimation of fracture toughness by the ball indentation method. Curved Compact Tension (CCT) specimens were also prepared from the five spool pieces for measurement of fracture toughness by conventional tests. The conventional fracture toughness values were used as reference data. A methodology has been developed to estimate the fracture properties of Zr 2.5 wt% Nb pressure tube material from the analysis of the ball indentation test data. This paper highlights the comparison between tensile properties measured from conventional tests and IProMS trials, and relates the fracture toughness parameters measured from conventional tests with IProMS-estimated fracture properties like the Indentation Energy to Fracture. (author)
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
Representing earthquake ground motion as a time-varying ARMA model, the instantaneous spectrum can be determined solely by the time-varying coefficients of the corresponding ARMA model. In this paper, the unscented Kalman filter is applied to estimate the time-varying coefficients. The comparison between the estimation results of the unscented Kalman filter and Kalman filter methods shows that the unscented Kalman filter can more precisely represent the distribution of the spectral peaks in the time-frequency plane than the Kalman filter, and its time and frequency resolution is finer, which ensures its better ability to track the local properties of earthquake ground motions and to identify systems with nonlinearity or abruptness. Moreover, the estimation results of ARMA models with different orders indicate that the theoretical frequency resolving power of the ARMA model, which was usually ignored in former studies, has a great effect on the estimation precision of the instantaneous spectrum, and it should be taken as one of the key factors in the order selection of ARMA models.
Estimation of Groundwater Recharge at Pahute Mesa using the Chloride Mass-Balance Method
Energy Technology Data Exchange (ETDEWEB)
Cooper, Clay A [DRI; Hershey, Ronald L [DRI; Healey, John M [DRI; Lyles, Brad F [DRI
2013-07-01
Groundwater recharge on Pahute Mesa was estimated using the chloride mass-balance (CMB) method. This method relies on the conservative properties of chloride to trace its movement from the atmosphere as dry- and wet-deposition through the soil zone and ultimately to the saturated zone. Typically, the CMB method assumes no mixing of groundwater with different chloride concentrations; however, because groundwater is thought to flow into Pahute Mesa from valleys north of Pahute Mesa, groundwater flow rates (i.e., underflow) and chloride concentrations from Kawich Valley and Gold Flat were carefully considered. Precipitation was measured with bulk and tipping-bucket precipitation gauges installed for this study at six sites on Pahute Mesa. These data, along with historical precipitation amounts from gauges on Pahute Mesa and estimates from the PRISM model, were evaluated to estimate mean annual precipitation. Chloride deposition from the atmosphere was estimated by analyzing quarterly samples of wet- and dry-deposition for chloride in the bulk gauges and evaluating chloride wet-deposition amounts measured at other locations by the National Atmospheric Deposition Program. Mean chloride concentrations in groundwater were estimated using data from the UGTA Geochemistry Database, data from other reports, and data from samples collected from emplacement boreholes for this study. Calculations were conducted assuming both no underflow and underflow from Kawich Valley and Gold Flat. Model results estimate recharge to be 30 mm/yr with a standard deviation of 18 mm/yr on Pahute Mesa, for elevations >1800 m amsl. These estimates assume Pahute Mesa recharge mixes completely with underflow from Kawich Valley and Gold Flat. The model assumes that precipitation, chloride concentration in bulk deposition, underflow and its chloride concentration, have been constant over the length of time of recharge.
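At its core, the no-underflow chloride mass balance is a one-line calculation; the numbers below are placeholders for illustration, not the Pahute Mesa inputs from the report:

```python
# Chloride mass balance (CMB): at steady state, chloride delivered by
# precipitation equals chloride carried to the water table by recharge,
# so R = P * Cl_dep / Cl_gw. All three inputs here are assumed values.
P = 300.0      # mean annual precipitation, mm/yr (assumed)
cl_dep = 0.5   # chloride in bulk (wet + dry) deposition, mg/L (assumed)
cl_gw = 5.0    # mean chloride concentration in groundwater, mg/L (assumed)

recharge = P * cl_dep / cl_gw   # mm/yr
```

Accounting for underflow, as the report does, adds the chloride flux and volume of inflowing groundwater from the upgradient valleys to the balance.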
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions
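The least-squares approaches compared above are all fitted to the empirical (method-of-moments) semivariogram; a minimal sketch on synthetic data (not the Kattegat salinity measurements) is:

```python
import numpy as np

# Empirical semivariogram: for each distance bin, gamma(h) is half the
# mean squared difference between values at locations separated by h.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(50, 2))   # synthetic station locations
z = rng.normal(0.0, 1.0, 50)                    # synthetic measurements

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = (z[:, None] - z[None, :]) ** 2
upper = np.triu(np.ones_like(d, dtype=bool), k=1)  # count each pair once

bins = np.linspace(0.0, 10.0, 11)
gamma = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = upper & (d > lo) & (d <= hi)
    gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
```

A parametric model (spherical, exponential, ...) is then fitted to these binned values by least squares, or bypassed entirely by maximum likelihood on the raw data.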
Directory of Open Access Journals (Sweden)
W. Z. Hou
2018-04-01
Full Text Available This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties for multispectral single-viewing satellite polarimetric measurements centred at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, the synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM) with intensity and polarization together over a bare soil surface for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, which is then integrated with a linear one-parametric BPDF model to represent the contribution of polarized surface reflectance, and thus to decouple the surface-atmosphere contribution from the TOA measurements. Focusing on two different aerosol models with the aerosol optical depth equal to 0.8 at 550 nm, the total DFS and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be well retrieved simultaneously with the surface parameters over the bare soil surface type. The findings of this study can provide guidance for inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.
Hou, W. Z.; Li, Z. Q.; Zheng, F. X.; Qie, L. L.
2018-04-01
This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties for multispectral single-viewing satellite polarimetric measurements centred at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, the synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM) with intensity and polarization together over a bare soil surface for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, which is then integrated with a linear one-parametric BPDF model to represent the contribution of polarized surface reflectance, and thus to decouple the surface-atmosphere contribution from the TOA measurements. Focusing on two different aerosol models with the aerosol optical depth equal to 0.8 at 550 nm, the total DFS and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be well retrieved simultaneously with the surface parameters over the bare soil surface type. The findings of this study can provide guidance for inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability of a given prior to induce sparse estimates heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed
Estimation of Physical Properties of AN-107 Cesium and Technetium Eluate Blend
Energy Technology Data Exchange (ETDEWEB)
Choi, A.S.
2001-06-12
The objective of this study, as defined in the associated test specifications and task technical and quality assurance plan, was to estimate all the physical properties that are required to design the storage and transport facilities for the concentrated cesium and technetium eluates. Specifically, the scope of this study included: (1) modeling of the aqueous electrolyte chemistry of Tank 241-AN-107 Cs and Tc eluate evaporators, (2) process modeling of semi-batch and continuous evaporation operations, (3) determination of the operating vacuum and target endpoint of each evaporator, (4) calculation of the physical properties of the concentrated Cs and Tc eluate blend, and (5) development of the empirical correlations for the physical properties thus estimated.
Order-of-magnitude physics of neutron stars. Estimating their properties from first principles
Energy Technology Data Exchange (ETDEWEB)
Reisenegger, Andreas; Zepeda, Felipe S. [Pontificia Universidad Catolica de Chile, Instituto de Astrofisica, Facultad de Fisica, Macul (Chile)
2016-03-15
We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties. (orig.)
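In the same order-of-magnitude spirit, the maximum-mass scale comes out of fundamental constants alone; the sketch below is our arithmetic on the standard Chandrasekhar-type scaling, not the paper's worked numbers:

```python
# Chandrasekhar-type mass scale M ~ (hbar*c/G)^(3/2) / m_n^2: the mass at
# which gravity overwhelms degeneracy pressure, built only from constants.
hbar = 1.055e-34    # J s
c = 3.0e8           # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
m_n = 1.675e-27     # neutron mass, kg
M_sun = 1.989e30    # kg

M_max_scale = (hbar * c / G) ** 1.5 / m_n ** 2   # ~ a few 10^30 kg
ratio = M_max_scale / M_sun
```

The estimate lands at roughly two solar masses, the right order of magnitude for the maximum neutron star mass, though the true limit depends on the equation of state and general-relativistic corrections.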
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. The above three methods have some limits. So we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that uniquely exist. So, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator. The difference between the two is that in the new method, the mass (or weight) of each observation has an influence on the others, whereas in the PL-estimator it does not
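For reference, the Product-Limit (Kaplan-Meier) estimator that the new method builds on can be sketched directly; the failure/censoring times below are made up:

```python
import numpy as np

# Product-Limit estimator: at each failure time, multiply the running
# reliability by (at_risk - 1) / at_risk; censored observations only
# shrink the risk set. This is the nonparametric MLE under right-censoring.
times = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1, 0, 1])        # 1 = failure, 0 = censored

order = np.lexsort((-event, times))         # sort by time, failures first in ties
times, event = times[order], event[order]

R, at_risk = 1.0, len(times)
reliability = {}
for t, e in zip(times, event):
    if e == 1:
        R *= (at_risk - 1) / at_risk
    at_risk -= 1
    reliability[t] = R                      # step value just after time t
```

By contrast, in the proposed method the mass attached to each observation influences the weights of the others rather than only shrinking the risk set.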
Methods to estimate the genetic risk
International Nuclear Information System (INIS)
Ehling, U.H.
1989-01-01
The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only can Mendelian mutations be quantified, but also other types of genetic disorders. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, the estimation of the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the reactor accident at Chernobyl in the vicinity of Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens will be emphasized. (author)
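The indirect method's arithmetic is simple enough to show in two lines; every number below is a placeholder for illustration, not a recommended risk coefficient:

```python
# Doubling-dose (indirect) method: the induced incidence of genetic
# disorders is the current background incidence scaled by dose / DD,
# where DD is the dose that doubles the spontaneous mutation rate.
background_incidence = 0.01   # fraction of births affected (assumed)
dose = 0.5                    # parental gonadal dose, Gy (assumed)
doubling_dose = 1.0           # Gy (assumed)

induced_incidence = background_incidence * dose / doubling_dose
```

The direct method instead multiplies an experimentally measured mutation rate per locus per unit dose by the number of relevant loci and the dose received.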
A method of estimating log weights.
Charles N. Mann; Hilton H. Lysons
1972-01-01
This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
1981-06-01
[Abstract fragmentary; recoverable topics include molecular orbital considerations, the dipole moment induced in a molecule as the basis of light scattering in Raman spectroscopy, appendices on simple linear regression (scatter plots, violations of regression assumptions), parachor values computed by the Sugden and McGowan methods, and a worked example estimating the normal boiling point of nicotine.]
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
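The significance-estimation idea above can be sketched as follows; as a simplification, AR(1) noise stands in for a steep power-law power spectral density and the zero-lag Pearson correlation stands in for the full cross-correlation function, so this is an assumption-laden illustration rather than the authors' implementation:

```python
import math
import random

def ar1_series(n, phi, rng):
    """Generate an AR(1) series as a crude stand-in for red (power-law) noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def pearson(a, b):
    """Zero-lag sample correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def cc_significance(r_obs, n, n_sims=1000, phi=0.9, seed=1):
    """Monte Carlo p-value: fraction of unrelated simulated pairs
    whose |correlation| reaches |r_obs|."""
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(n_sims)
        if abs(pearson(ar1_series(n, phi, rng),
                       ar1_series(n, phi, rng))) >= abs(r_obs)
    )
    return exceed / n_sims
```

Running this with short, strongly autocorrelated series shows how often unrelated red-noise pairs produce sizable correlations, which is the danger the paper demonstrates.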
A Fast LMMSE Channel Estimation Method for OFDM Systems
Directory of Open Access Journals (Sweden)
Zhou Wen
2009-01-01
Full Text Available A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require prior statistical knowledge of the channel and avoids inverting a large matrix by using the fast Fourier transform (FFT). Therefore, the computational complexity is reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.
[Flavouring estimation of quality of grape wines with use of methods of mathematical statistics].
Yakuba, Yu F; Khalaphyan, A A; Temerdashev, Z A; Bessonov, V V; Malinkin, A D
2016-01-01
The formation of an integral estimate of a wine's flavour during tasting is discussed, along with the advantages and disadvantages of the procedures involved. The materials investigated were natural white and red wines of Russian manufacture, produced by traditional technologies from Vitis vinifera, direct hybrids, and blends, as well as experimental wines (more than 300 samples in total). The aim of the research was to establish, by methods of mathematical statistics, the correlation between the content of a wine's nonvolatile matter and its tasting quality rating. The contents of organic acids, amino acids and cations in the wines were considered the main factors influencing the flavour, since they essentially define the beverage's quality. These components were determined in the wine samples by capillary electrophoresis («CAPEL»). In parallel with the analytical quality checks, a representative group of specialists carried out a tasting estimation of the wines using a 100-point system. The possibility of statistically modelling the correlation of the tasting estimation with the analytical data on amino acids and cations, which reasonably describe the wine's flavour, was examined. Statistical modelling of the correlation between the tasting estimation and the content of the major cations (ammonium, potassium, sodium, magnesium, calcium) and free amino acids (proline, threonine, arginine), taking into account their level of influence on flavour and the analytical valuation within fixed quality limits, was carried out with Statistica. Adequate statistical models have been constructed that can predict the tasting estimation, that is, determine the wine's quality, from the content of the components forming its flavour properties. It is emphasized that, along with aromatic (volatile) substances, the nonvolatile matter - mineral substances and organic substances - amino acids such as proline, threonine, arginine …
Directory of Open Access Journals (Sweden)
Varvara Sergeyevna Spirina
2015-03-01
Full Text Available Objective: to research and elaborate an economic-mathematical model for predicting commercial property attendance, by the example of shopping malls, based on the estimation of their attraction for consumers. Methods: the methodological and theoretical basis of the work comprised the rules and techniques of qualimetry and the matrix mechanisms of complex estimation needed to estimate and aggregate the factors influencing the choice of a consumer group among many alternative property venues. Results: two mechanisms are elaborated for the complex estimation of commercial property, which is necessary to evaluate its attraction for consumers and to predict attendance. By the example of two large shopping malls in Perm, Russia, it is shown that using both mechanisms in the economic-mathematical model of commercial property attendance increases the accuracy of its predictions compared to the traditional Huff model. The reliability of the results is confirmed by the coincidence of the calculated results with actual poll data on shopping mall attendance. Scientific novelty: a multifactor model of the attraction of commercial property for consumers was elaborated by the example of shopping malls; the parameters of the complex estimation mechanisms are defined, namely eight parameters influencing the choice of a shopping mall by consumers. The model differs from the traditional Huff model in the number of factors influencing the choice of a shopping mall by consumers and in the higher accuracy of predicting its attendance. Practical significance: the economic-mathematical models able to predict commercial property attendance can be used for efficient planning of measures to attract consumers and to preserve and develop the competitive advantages of commercial property.
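For reference, the traditional Huff model that the proposed model is compared against assigns each consumer a choice probability proportional to a venue's attraction divided by a power of its distance; a minimal sketch with hypothetical values:

```python
def huff_probabilities(attraction, distance, lam=2.0):
    """Huff model: P_j = (A_j / d_j**lam) / sum_k (A_k / d_k**lam).
    attraction and distance are per-venue lists; lam is the distance-decay
    exponent (the value 2.0 here is an illustrative assumption)."""
    scores = [a / (d ** lam) for a, d in zip(attraction, distance)]
    total = sum(scores)
    return [s / total for s in scores]

# Two hypothetical malls: one large but far, one small but near
p = huff_probabilities([100.0, 50.0], [2.0, 1.0])
# scores 100/4 = 25 and 50/1 = 50 -> probabilities 1/3 and 2/3
```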
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
International Nuclear Information System (INIS)
Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro
2017-01-01
Methods to estimate the errors included in observational data, and methods to compare numerical results with observational results, are investigated with a view toward the verification and validation (V and V) of seismic simulations. For error estimation, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, are surveyed. As a result, it is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For comparing numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the 144 publications above are surveyed. As a result, it is found that six methods have mainly been proposed in existing research. Evaluating those methods against nine items, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary either to employ the existing methods while compensating for their disadvantages or to seek a novel method. (author)
Evaluation of three paediatric weight estimation methods in Singapore.
Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong
2013-04-01
Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
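The two age-based formulas, and the signed percentage difference used in the Bland-Altman analysis (positive values indicating underestimation), can be sketched directly:

```python
def apls_weight(age_years):
    """APLS formula: estimated weight (kg) = 2 * (age + 4)."""
    return 2 * (age_years + 4)

def luscombe_weight(age_years):
    """Luscombe formula: estimated weight (kg) = 3 * age + 7."""
    return 3 * age_years + 7

def mean_percentage_difference(true_weights, est_weights):
    """Signed percentage difference, averaged over patients; the sign
    convention (true - estimated)/true * 100 makes underestimation positive,
    matching the study's reported MPD values."""
    diffs = [(t - e) / t * 100 for t, e in zip(true_weights, est_weights)]
    return sum(diffs) / len(diffs)

# A 5-year-old: APLS gives 18 kg, Luscombe gives 22 kg
```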
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
…data under investigation. The flow-physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce … for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed …
A Channelization-Based DOA Estimation Method for Wideband Signals
Directory of Open Access Journals (Sweden)
Rui Guo
2016-07-01
Full Text Available In this paper, we propose a novel direction of arrival (DOA estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR using direct wideband radio frequency (RF digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
清水, 千弘; Chihiro, Shimizu
2014-01-01
How exactly should one estimate property investment returns? Investors in property aim to maximize capital gains from price increases and income generated by the property. How are the returns on investment in property determined based on its characteristics, and what kind of market characteristics does it have? Focusing on the Tokyo commercial property market and residential property market, the purpose of this paper was to break down and measure the micro-structure of property investment re...
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
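Two of the estimator families compared in such studies, the fixed-bandwidth kernel method and the nearest neighbor method, can be sketched in one dimension:

```python
import math

def kde_gaussian(x, sample, h):
    """Fixed-bandwidth Gaussian kernel density estimate at point x:
    f(x) = (1/(n*h)) * sum_i K((x - x_i)/h), K = standard normal pdf."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / (
        n * h * math.sqrt(2 * math.pi)
    )

def knn_density(x, sample, k):
    """k-nearest-neighbour density estimate in 1-D:
    f(x) ~ k / (2 * n * R_k), where R_k is the distance to the kth
    nearest sample point (the local window adapts to the data)."""
    dists = sorted(abs(x - s) for s in sample)
    r_k = dists[k - 1]
    return k / (2 * len(sample) * r_k)
```

The kernel estimate smooths with a fixed window everywhere, while the nearest neighbor estimate widens its window where data are sparse; the sensitivity to `h` and `k` is exactly the parameter sensitivity the survey examines.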
Estimating local scaling properties for the classification of interstitial lung disease patterns
Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel
2011-03-01
Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images were acquired from HRCT chest exams. 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions including the Bonferroni correction. The best classification results were obtained by the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers with the highest accuracy (94.1%, 93.7%; for the k-NN and RBFN classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
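A minimal sketch of the model-based idea: given one reference operating point, the pump affinity laws predict flow rate and head at another rotational speed without external flow or pressure measurements. This is only the simplest ingredient of such a model, and the numbers are hypothetical:

```python
def affinity_scale(q_ref, h_ref, n_ref, n):
    """Pump affinity laws: flow rate scales with speed (Q ~ n) and
    head with its square (H ~ n**2); shaft power scales with n**3.
    q_ref, h_ref: flow and head at reference speed n_ref."""
    r = n / n_ref
    return q_ref * r, h_ref * r * r

# Hypothetical pump: 100 m3/h at 50 m head when running at 1450 rpm
q, h = affinity_scale(100.0, 50.0, 1450.0, 2900.0)
# doubling the speed doubles the flow and quadruples the head
```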
Population Estimation with Mark and Recapture Method Program
International Nuclear Information System (INIS)
Limohpasmanee, W.; Kaewchoung, W.
1998-01-01
Population estimation provides important information required for insect control planning, especially control with SIT. Moreover, it can be used to evaluate the efficiency of a control method. Owing to the complexity of the calculations, population estimation with mark and recapture methods has not been widely used. This program was therefore developed in QBasic to make the calculations more accurate and easier. The program implements six methods: Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's. The results were compared with those of the original methods and found to be accurate and easier to apply
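The textbook starting point for this family of estimators (not one of the six the program implements) is the Lincoln-Petersen estimator, with Chapman's bias-corrected variant:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Single mark-recapture estimate of population size:
    N ~ M * C / R, where M animals are marked and released, C are
    caught in a second sample, and R of those carry marks."""
    return marked * caught / recaptured

def chapman(marked, caught, recaptured):
    """Chapman's bias-corrected variant, usable even when R = 0."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Mark 100 insects; a second sample of 50 contains 10 marked ones
n_lp = lincoln_petersen(100, 50, 10)   # estimate: 500
```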
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
DEFF Research Database (Denmark)
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTFs) are useful in many studies, such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity parameters. The PTFs are based on neural networks and the bootstrap method, use different sets of predictors, and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions …
Properties of estimated characteristic roots
Bent Nielsen; Heino Bohn Nielsen
2008-01-01
Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability, so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...
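The characteristic roots themselves are just polynomial roots of the fitted autoregressive coefficients; a sketch for the AR(2) case, where the multiple-root situation mentioned above appears as a repeated root:

```python
import cmath

def ar2_char_roots(phi1, phi2):
    """Roots of lambda**2 - phi1*lambda - phi2 = 0 for the AR(2) model
    x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t (companion-matrix convention);
    stationarity requires both roots inside the unit circle."""
    disc = cmath.sqrt(phi1 * phi1 + 4 * phi2)
    return (phi1 + disc) / 2, (phi1 - disc) / 2

# phi1 = 1.0, phi2 = -0.25 gives (lambda - 0.5)**2 = 0: a double root at 0.5,
# exactly the multiple-root case where estimated roots become ill-behaved
r1, r2 = ar2_char_roots(1.0, -0.25)
```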
Estimation of the thermophysical and mechanical properties and the equation of state of Li2O
International Nuclear Information System (INIS)
Krikorian, O.H.
1985-01-01
Correlation methods based on Knoop microhardness and melting points are developed for estimating the tensile strength, Young's modulus, and Poisson's ratio of Li2O as a function of grain size, porosity, and temperature. Generalized expressions for extrapolating the existing data on thermal conductivity and thermal expansivity are given. These derived thermophysical data are combined to predict thermal stress factors for Li2O. Based on the available vapor pressure data on Li2O and empirical correlations for the equation of state in the liquid and vapor phases, estimates of the properties of Li2O are made; an approximate critical temperature of 6800±800 K is obtained. (author)
Soil profile property estimation with field and laboratory VNIR spectroscopy
Diffuse reflectance spectroscopy (DRS) soil sensors have the potential to provide rapid, high-resolution estimation of multiple soil properties. Although many studies have focused on laboratory-based visible and near-infrared (VNIR) spectroscopy of dried soil samples, previous work has demonstrated ...
Estimated value of insurance premium due to Citarum River flood by using Bayesian method
Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.
2018-03-01
Flooding of the Citarum river in South Bandung, West Java, Indonesia, happens almost every year. It causes property damage and hence economic loss. The risk of loss can be mitigated through a flood insurance program. In this paper, we discuss the estimation of insurance premiums for Citarum river floods by the Bayesian method. It is assumed that the flood loss data follow a Pareto distribution with a fat right tail. The distribution parameters are estimated by the Bayesian method. First, parameter estimation is done under the assumption that the prior comes from the Gamma distribution family, while the observed data follow the Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The analysis shows the following estimated premiums under the pure premium principle: for a loss of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss of IDR 574.53 million, a premium of IDR 308.95 million. These premium estimates can serve as a reference for setting reasonable premiums that neither burden the insured nor cause loss to the insurer.
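The Gamma-prior/Pareto-likelihood update described above is conjugate when the Pareto scale is known; a sketch with hypothetical numbers (the pure premium here is the posterior plug-in Pareto mean, defined only for shape > 1):

```python
import math

def pareto_gamma_posterior(a, b, losses, x_m):
    """Gamma(a, b) prior on the Pareto shape alpha (rate parametrization),
    Pareto(alpha, x_m) likelihood with known scale x_m.
    Posterior is Gamma(a + n, b + sum(log(x_i / x_m)))."""
    n = len(losses)
    return a + n, b + sum(math.log(x / x_m) for x in losses)

def pure_premium(alpha, x_m):
    """Pure premium = expected loss of a Pareto(alpha, x_m) claim,
    alpha * x_m / (alpha - 1); undefined for alpha <= 1 (infinite mean)."""
    if alpha <= 1:
        raise ValueError("mean undefined for alpha <= 1")
    return alpha * x_m / (alpha - 1)

# Hypothetical prior Gamma(2, 1), two losses of 2 and 4 units, scale x_m = 1
a_post, b_post = pareto_gamma_posterior(2.0, 1.0, [2.0, 4.0], 1.0)
alpha_hat = a_post / b_post          # posterior mean of the shape
premium = pure_premium(alpha_hat, 1.0)
```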
Directory of Open Access Journals (Sweden)
Moreira Paulo H. S.
2016-03-01
Full Text Available In this study the hydraulic and solute transport properties of an unsaturated soil were estimated simultaneously from a relatively simple small-scale laboratory column infiltration/outflow experiment. As governing equations we used the Richards equation for variably saturated flow and a physical non-equilibrium dual-porosity type formulation for solute transport. A Bayesian parameter estimation approach was used in which the unknown parameters were estimated with the Markov Chain Monte Carlo (MCMC method through implementation of the Metropolis-Hastings algorithm. Sensitivity coefficients were examined in order to determine the most meaningful measurements for identifying the unknown hydraulic and transport parameters. Results obtained using the measured pressure head and solute concentration data collected during the unsaturated soil column experiment revealed the robustness of the proposed approach.
Health effects estimation for contaminated properties
International Nuclear Information System (INIS)
Marks, S.; Denham, D.H.; Cross, F.T.; Kennedy, W.E. Jr.
1984-05-01
As part of an overall remedial action program to evaluate the need for and institute actions designed to minimize health hazards from inactive tailings piles and from displaced tailings, methods for estimating health effects from tailings were developed and applied to the Salt Lake City area. 2 references, 2 tables
Higher Order Improvements for Approximate Estimators
DEFF Research Database (Denmark)
Kristensen, Dennis; Salanié, Bernard
Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer …
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance …
Measurement of uranium dioxide thermophysical properties by the laser flash method
International Nuclear Information System (INIS)
Grossi, Pablo Andrade; Ferreira, Ricardo Alberto Neto; Camarano, Denise das Merces; Andrade, Roberto Marcio de
2009-01-01
The evaluation of the thermophysical properties of uranium dioxide (UO2), including a reliable uncertainty assessment, is required for nuclear reactor design. This information is used by thermohydraulic codes to define operational aspects and to assure safety when analyzing various potential accident situations. The laser flash method has become the most popular method for measuring the thermophysical properties of materials. Despite its several advantages, some experimental obstacles arise from the difficulty of experimentally attaining the ideal initial and boundary conditions required by the original method. An experimental apparatus and a methodology for estimating the uncertainties of thermal diffusivity, thermal conductivity and specific heat measurements based on the laser flash method are presented. A stochastic thermal diffusion model has been developed and validated with standard samples. Inverse heat conduction problems (IHCPs), solved by the finite volume technique, were applied to the measurement process with real initial and boundary conditions, and the Monte Carlo method was used for propagating the uncertainties. The main sources of uncertainty were: pulse time, laser power, thermal exchanges, absorptivity, emissivity, sample thickness, specific mass, and the dynamic influence of the temperature measurement system. As results, mean values and uncertainties of the thermal diffusivity, thermal conductivity and specific heat of UO2 are presented. (author)
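The classical data reduction for the laser flash method (not this paper's stochastic/IHCP treatment) is Parker's adiabatic half-rise-time formula, from which conductivity follows given density and specific heat; the values below are illustrative, not measured UO2 data:

```python
def parker_diffusivity(thickness_m, t_half_s):
    """Parker's adiabatic formula for the laser flash method:
    alpha = 0.1388 * L**2 / t_1/2, where L is the sample thickness and
    t_1/2 the time for the rear face to reach half its maximum
    temperature rise."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def thermal_conductivity(alpha, density, specific_heat):
    """k = alpha * rho * c_p."""
    return alpha * density * specific_heat

# Illustrative numbers: 1 mm sample, half-rise time 0.1388 s
alpha = parker_diffusivity(1.0e-3, 0.1388)   # 1e-6 m2/s
```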
Estimating Aquifer Properties Using Sinusoidal Pumping Tests
Rasmussen, T. C.; Haborak, K. G.; Young, M. H.
2001-12-01
We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
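The confined-aquifer case admits a compact numerical sketch. For sinusoidal pumping Q(t) = Q0 cos(ωt), the steady-periodic drawdown of the Theis-analogue boundary value problem is the real part of φ(r)e^{iωt} with φ(r) = Q0/(2πT) · K0(r√(iωS/T)); the amplitude attenuation and phase lag at an observation well then constrain transmissivity T and storativity S. The function below is a textbook rendering of that solution (parameter names are illustrative), not the paper's exact formulation:

```python
import numpy as np
from scipy.special import kv  # modified Bessel K_v; supports complex argument


def periodic_drawdown(r, omega, T, S, Q0):
    """Steady-periodic drawdown phasor for sinusoidal pumping Q0*cos(omega*t)
    in a fully confined aquifer (Theis-analogue boundary value problem).

    Returns (amplitude, phase_lag_rad) of the drawdown at radius r.
    A textbook sketch under idealized assumptions (zero well radius,
    homogeneous aquifer), not the paper's derivation.
    """
    k = np.sqrt(1j * omega * S / T)             # complex "wavenumber"
    phasor = Q0 / (2 * np.pi * T) * kv(0, k * r)
    return np.abs(phasor), -np.angle(phasor)
```

Inverting measured amplitude and phase at one or more radii for (T, S) is then a two-parameter fit against this forward model.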
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)
2014-09-15
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as the inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
International Nuclear Information System (INIS)
Lee, Kyun Ho; Kim, Ki Wan
2014-01-01
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as the inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
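The inverse analysis described in the two records above treats property estimation as misfit minimization by a swarm. The sketch below is the plain PSO update only (the repulsive variant adds a third, repelling velocity term); the objective, bounds, and coefficient values are illustrative assumptions, not the paper's configuration:

```python
import numpy as np


def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Plain particle swarm optimization for an inverse problem posed as
    misfit minimization. The paper's RPSO adds a repulsion term to escape
    local minima; this sketch is only the textbook variant."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], float)
    hi = np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))        # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()                                        # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                               # typical coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the radiation setting, `f` would be the squared residual between measured boundary heat fluxes and those predicted by the enclosure model for candidate surface properties.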
Bin mode estimation methods for Compton camera imaging
International Nuclear Information System (INIS)
Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.
2014-01-01
We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
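The EM algorithms above build on the classical ML-EM iteration for Poisson emission data, which can be sketched as follows; the system matrix `T` and data `y` are illustrative placeholders, and the accelerated and MAP variants of the paper modify this baseline update:

```python
import numpy as np


def mlem(T, y, n_iter=50):
    """Classical ML-EM update for Poisson-distributed emission data y with
    system matrix T (detection probabilities):

        lambda_j <- lambda_j / sum_i T_ij * sum_i T_ij * y_i / (T lambda)_i

    Guarantees nonnegativity and monotonically non-decreasing likelihood.
    """
    lam = np.ones(T.shape[1])
    sens = T.sum(axis=0)                            # sensitivity per voxel
    for _ in range(n_iter):
        proj = T @ lam                              # forward projection
        ratio = y / np.maximum(proj, 1e-12)         # measured / predicted
        lam *= (T.T @ ratio) / np.maximum(sens, 1e-12)
    return lam
```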
Methods for the estimation of uranium ore reserves
International Nuclear Information System (INIS)
1985-01-01
The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry.
A method for fast energy estimation and visualization of protein-ligand interaction
Tomioka, Nobuo; Itai, Akiko; Iitaka, Yoichi
1987-10-01
A new computational and graphical method for facilitating ligand-protein docking studies is developed on a three-dimensional computer graphics display. Various physical and chemical properties inside the ligand binding pocket of a receptor protein, whose structure is elucidated by X-ray crystal analysis, are calculated on three-dimensional grid points and are stored in advance. By utilizing those tabulated data, it is possible to estimate the non-bonded and electrostatic interaction energy and the number of possible hydrogen bonds between protein and ligand molecules in real time during an interactive docking operation. The method also provides a comprehensive visualization of the local environment inside the binding pocket. With this method, it becomes easier to find a roughly stable geometry of ligand molecules, and one can therefore make a rapid survey of the binding capability of many drug candidates. The method will be useful for drug design as well as for the examination of protein-ligand interactions.
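The speedup in the abstract above comes from tabulating potentials on a grid once, then replacing per-atom pairwise sums with fast lookups during interactive docking. A toy version of the electrostatic part is sketched below (the original also tabulates non-bonded and hydrogen-bond terms; the nearest-grid lookup and unit-free Coulomb form are simplifying assumptions):

```python
import numpy as np


def tabulate_potential(grid, protein_xyz, protein_q):
    """Precompute the Coulomb potential (arbitrary units, no dielectric)
    at each grid point inside the binding pocket from the protein's
    point charges -- the one-time tabulation step."""
    pot = np.zeros(len(grid))
    for xyz, q in zip(protein_xyz, protein_q):
        pot += q / np.linalg.norm(grid - xyz, axis=1)
    return pot


def ligand_energy(grid, pot, ligand_xyz, ligand_q):
    """Estimate the electrostatic interaction energy of a ligand pose by
    nearest-grid lookup -- the trick that makes evaluation fast enough
    for real-time docking (production codes interpolate trilinearly)."""
    e = 0.0
    for xyz, q in zip(ligand_xyz, ligand_q):
        nearest = np.argmin(np.linalg.norm(grid - xyz, axis=1))
        e += q * pot[nearest]
    return e
```

Moving the ligand only changes which grid values are read, so the cost per pose is independent of the protein's size.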
A new method to determine magnetic properties of the unsaturated-magnetized rotor of a novel gyro
Li, Hai; Liu, Xiaowei; Dong, Changchun; Zhang, Haifeng
2016-06-01
A new method is proposed to determine the magnetic properties of the unsaturated-magnetized, small and irregularly shaped rotor of a novel gyro. The method is based on finite-element analysis and measurements of the magnetic flux density distribution, determining the magnetic parameters by comparing the flux density distributions modeled under different parameters with the measured ones. An experiment on an N30-grade NdFeB magnet shows that its residual magnetic flux density is 1.10±0.01 T and its coercive field strength is 801±3 kA/m, consistent with the given parameters of the material. The method was applied to determine the magnetic properties of the gyro rotor, and the acquired magnetic properties were used to predict the open-loop gyro precession frequency. The predicted precession frequency should be larger than 12.9 Hz, which is close to the experimental result of 13.5 Hz. This proves that the method is accurate in estimating the magnetic properties of the rotor of the gyro.
A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE
Directory of Open Access Journals (Sweden)
GEE-YONG PARK
2014-02-01
A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. These models were found to be capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability can be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
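The abstract does not name the specific NHPP variant; the canonical example of this SRGM family is the Goel-Okumoto model, sketched below to show how a mean value function yields both the expected defect count and a reliability figure (parameter values are illustrative):

```python
import numpy as np


def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)):
    expected cumulative number of failures observed by test time t,
    where a is the eventual total defect count and b the detection rate."""
    return a * (1.0 - np.exp(-b * t))


def reliability(x, t, a, b):
    """Probability of no failure in the interval (t, t+x] under the NHPP
    assumption: R(x | t) = exp(-(m(t+x) - m(t)))."""
    return np.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))
```

The remaining defect estimate at time t is then simply a − m(t), the quantity the paper's Bayesian schemes aim to pin down from sparse failure data.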
Assessment of microalgae biodiesel fuels using a fuel property estimation methodology
Energy Technology Data Exchange (ETDEWEB)
Torrens, Jonas Colen Ladeia; Vargas, Jose Viriato Coelho; Mariano, Andre Bellin [Center for Research and Development of Sustainable Energy. Universidade Federal do Parana, Curitiba, PR (Brazil)
2010-07-01
Depleting supplies of petroleum and concerns about global warming are drawing attention to alternative sources of energy. In this context, advanced biofuels, derived from non-edible higher plants and microorganisms, are presented as promising options for the transportation sector. Biodiesel, the most prominent alternative fuel for compression ignition engines, has a large number of potential feedstocks, such as plants (e.g., soybean, canola, palm) and microorganisms (e.g., microalgae, yeasts, fungi and bacteria). In order to determine their potential, most studies focus on economic viability, but few discuss the technical viability of producing high-quality fuels from such feedstocks. Since the fuel properties depend on the composition of the parent oil, and considering the variability of the fatty acid profiles found in these organisms, the derived fuels may present undesirable properties, e.g., high viscosity, low cetane number, low oxidative stability and poor cold flow properties. It is therefore very important to develop ways of analysing fuel quality prior to production, especially considering the high cost of producing and testing several varieties of plants and microorganisms. To this aim, this work presents the use of fuel property estimation methods to assess the density, viscosity, cetane number and cold filter plugging point of several microalgae-derived biofuels, comparing them to more conventional biodiesel fuels. The information gathered with these methods helps in the selection of species and cultivation parameters, which have a high impact on the derived fuel quality, and has been successfully employed at the Center for Research and Development of Sustainable Energy. The results demonstrate that some species of microalgae have the potential to produce high-quality biodiesel if cultivated under optimised conditions, associated with the possibility of obtaining valuable long chain
Evaluation and reliability of bone histological age estimation methods
African Journals Online (AJOL)
Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...
Directory of Open Access Journals (Sweden)
Radoi Emanuel
2004-01-01
In order to operate properly, superresolution methods based on orthogonal subspace decomposition, such as multiple signal classification (MUSIC) or estimation of signal parameters via rotational invariance techniques (ESPRIT), need an accurate estimate of the signal subspace dimension, that is, of the number of harmonic components that are superimposed and corrupted by noise. This estimation is particularly difficult when the S/N ratio is low and the statistical properties of the noise are unknown. Moreover, in some applications such as radar imagery, it is very important to avoid underestimating the number of harmonic components, which are associated with the target scattering centers. In this paper, we propose an effective method for estimating the signal subspace dimension which is able to operate against colored noise with performance superior to that of the classical information theoretic criteria of Akaike and Rissanen. The capabilities of the new method are demonstrated through computer simulations, and it is shown that, compared to three other methods, it achieves the best trade-off across four criteria: S/N ratio in white noise, frequency band of colored noise, dynamic range of the harmonic component amplitudes, and computing time.
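For reference, the Rissanen (MDL) baseline that the paper benchmarks against selects the model order from the eigenvalues of the sample covariance matrix; a standard implementation is sketched below (this is the classical criterion, not the paper's proposed method):

```python
import numpy as np


def mdl_order(eigvals, n_snapshots):
    """Rissanen's MDL criterion for the number of sources/harmonics.

    eigvals: eigenvalues of the sample covariance matrix (any order);
    n_snapshots: number of snapshots used to estimate that matrix.
    Returns the k minimizing
        -N (p-k) log(geo_mean / arith_mean of the p-k smallest eigenvalues)
        + 0.5 k (2p - k) log N.
    """
    lam = np.sort(np.asarray(eigvals, float))[::-1]     # descending
    p = lam.size
    scores = []
    for k in range(p):
        tail = lam[k:]                                   # "noise" eigenvalues
        geo = np.exp(np.mean(np.log(tail)))
        arith = np.mean(tail)
        ll = -n_snapshots * (p - k) * np.log(geo / arith)
        pen = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(ll + pen)
    return int(np.argmin(scores))
```

The criterion works well in white noise, which is precisely the assumption that breaks down in the colored-noise scenarios the paper targets.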
Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis
Energy Technology Data Exchange (ETDEWEB)
Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group
2004-07-01
Due to the occurrence of large earthquakes like the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Using nonlinear analysis to evaluate the safety of dams is problematic in that the assumed material properties strongly influence the results, and in that the results differ greatly according to the damage estimation models or analysis programs used. This paper reports evaluation indices based on a linear dynamic analysis method, and the characteristics of crack progression in concrete gravity dams of different shapes obtained with a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential crack initiation locations, the damage due to cracks can be roughly predicted. 4 refs., 1 tab., 13 figs.
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different
Montero-Lorenzo, José-María; Larraz-Iribas, Beatriz; Páez, Antonio
2009-12-01
A vast majority of the recent literature on spatial hedonic analysis has been concerned with residential property values, with only very few examples of studies focused on commercial property prices. The dearth of studies can be attributed to some of the challenges faced in the analysis of commercial properties, in particular the scarcity of information compared to residential transactions. In order to address this issue, in this paper we propose the use of cokriging and housing prices as ancillary information to estimate commercial property prices. Cokriging takes into account the spatial autocorrelation structure of property prices, and the use of more abundant information on housing prices helps to improve the accuracy of property value estimates. A case study of Toledo in Spain, a city for which commercial activity stemming from tourism is one of the key elements of the economy in the city, demonstrates that substantial accuracy and precision gains can be obtained from the use of cokriging.
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Galaxy–galaxy lensing estimators and their covariance properties
International Nuclear Information System (INIS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez
2017-01-01
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Sediment spatial distribution evaluated by three methods and its relation to some soil properties
Energy Technology Data Exchange (ETDEWEB)
Bacchi, O.O.S. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil)]; Reichardt, K. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Departamento de Ciencias Exatas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil)]; Sparovek, G. [Departamento de Solos e Nutricao de Plantas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil)]
2003-02-15
An investigation of rates and spatial distribution of sediments on an agricultural field cultivated with sugarcane was undertaken using the 137Cs technique and the USLE and WEPP models. The study was carried out on the Ceveiro watershed of the Piracicaba river basin, state of Sao Paulo, Brazil, experiencing severe soil degradation due to soil erosion. The objectives of the study were to compare the spatial distribution of sediments evaluated by the three methods and its relation to some soil properties. Erosion and sedimentation rates and their spatial distribution estimated by the three methods were completely different. Although not able to show sediment deposition, the spatial distribution of erosion rates evaluated by USLE presented the best correlation with the other studied soil properties. (author)
Directory of Open Access Journals (Sweden)
Kim Hyang-Mi
2012-09-01
exposures observed with error. However, compared with CEM, CGBS is easier to implement and has more desirable bias-reducing properties in the presence of substantial proportions of missing exposure data. Conclusion: The CGBS approach could be useful for estimating exposure-disease associations in semi-ecological studies when the true group means are ordered and the number of measured exposures in each group is small. These findings have important implications for the cost-effective design of semi-ecological studies, because they enable investigators to estimate exposure-disease associations more reliably with a smaller exposure measurement campaign than the analytical methods that were historically employed.
Minnis, Patrick; Hong, Gang; Sun-Mack, Szedung; Smith, William L.; Chen, Yan; Miller, Steven D.
2016-05-01
Retrieval of ice cloud properties using IR measurements has a distinct advantage over the visible and near-IR techniques by providing consistent monitoring regardless of solar illumination conditions. Historically, the IR bands at 3.7, 6.7, 11.0, and 12.0 µm have been used to infer ice cloud parameters by various methods, but the reliable retrieval of ice cloud optical depth τ is limited to nonopaque cirrus with τ < 8. The Ice Cloud Optical Depth from Infrared using a Neural network (ICODIN) method is developed in this paper by training Moderate Resolution Imaging Spectroradiometer (MODIS) radiances at 3.7, 6.7, 11.0, and 12.0 µm against CloudSat-estimated τ during the nighttime using 2 months of matched global data from 2007. An independent data set comprising observations from the same 2 months of 2008 was used to validate the ICODIN. One 4-channel and three 3-channel versions of the ICODIN were tested. The training and validation results show that IR channels can be used to estimate ice cloud τ up to 150 with correlations above 78% and 69% for all clouds and only opaque ice clouds, respectively. However, τ for the deepest clouds is still underestimated in many instances. The corresponding RMS differences relative to CloudSat are ~100% and ~72%. If the opaque clouds are properly identified with the IR methods, the RMS differences in the retrieved optical depths are ~62%. The 3.7 µm channel appears to be most sensitive to optical depth changes but is constrained by poor precision at low temperatures. A method for estimating total optical depth is explored for future estimation of cloud water path. Factors affecting the uncertainties and potential improvements are discussed. With improved techniques for discriminating between opaque and semitransparent ice clouds, the method can ultimately improve cloud property monitoring over the entire diurnal cycle.
Mechanical properties of porcine brain tissue in vivo and ex vivo estimated by MR elastography.
Guertler, Charlotte A; Okamoto, Ruth J; Schmidt, John L; Badachhape, Andrew A; Johnson, Curtis L; Bayly, Philip V
2018-03-01
The mechanical properties of brain tissue in vivo determine the response of the brain to rapid skull acceleration. These properties are thus of great interest to the developers of mathematical models of traumatic brain injury (TBI) or neurosurgical simulations. Animal models provide valuable insight that can improve TBI modeling. In this study we compare estimates of mechanical properties of the Yucatan mini-pig brain in vivo and ex vivo using magnetic resonance elastography (MRE) at multiple frequencies. MRE allows estimations of properties in soft tissue, either in vivo or ex vivo, by imaging harmonic shear wave propagation. Most direct measurements of brain mechanical properties have been performed using samples of brain tissue ex vivo. It has been observed that direct estimates of brain mechanical properties depend on the frequency and amplitude of loading, as well as the time post-mortem and condition of the sample. Using MRE in the same animals at overlapping frequencies, we observe that porcine brain tissue in vivo appears stiffer than porcine brain tissue samples ex vivo at frequencies of 100 Hz and 125 Hz, but measurements show closer agreement at lower frequencies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Development of an in vivo method for determining material properties of passive myocardium
Directory of Open Access Journals (Sweden)
Espen Remme
2004-10-01
Calculation of mechanical stresses and strains in the left ventricular (LV) myocardium by the finite element (FE) method relies on adequate knowledge of the material properties of myocardial tissue. In this paper we present a model-based estimation procedure to characterize the stress-strain relationship in passive LV myocardium. A 3D FE model of the LV myocardium was used, which included morphological fiber and sheet structure and a nonlinear orthotropic constitutive law with different stiffness in the fiber, sheet and sheet-normal directions. The estimation method was based on measured wall strains. We analyzed the method's ability to estimate the material parameters by generating a set of synthetic strain data by simulating the LV inflation phase with known material parameters. In this way we were able to verify the correctness of the solution and to analyze the effects of measurement and model error on the solution accuracy and stability. A sensitivity analysis was performed to investigate the observability of the material parameters and to determine which parameters to estimate. The results showed a high degree of coupling between the parameters governing the stiffness in each direction. Thus, only one parameter in each of the three directions was estimated. For the tested magnitudes of added noise and introduced model errors, the resulting estimated stress-strain characteristics in the fiber and sheet directions converged with good accuracy to the known relationship. The sheet-normal stress-strain relationship had a higher degree of uncertainty as more noise was added and model error was introduced.
Study on Top-Down Estimation Method of Software Project Planning
Institute of Scientific and Technical Information of China (English)
ZHANG Jun-guang; LÜ Ting-jie; ZHAO Yu-mei
2006-01-01
This paper studies a new software project planning method using actual project data in order to make software project plans more effective. From the perspective of systems theory, the new method regards a software project plan as an associative unit of study. During top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) and the analogy method are combined to estimate its size; effort estimates and specific schedules are then obtained according to the distribution of effort across phases. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods leads to effective software project planning.
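The PERT step mentioned above combines three-point estimates under a beta-distribution assumption; the standard formula is sketched below (the pairing with the analogy method in the paper is not reproduced here):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point PERT estimate and its standard deviation under the
    usual beta-distribution assumption:
        mean = (O + 4M + P) / 6,   std = (P - O) / 6."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6.0
    std = (pessimistic - optimistic) / 6.0
    return mean, std
```

For example, size estimates of 2, 4, and 6 person-months give an expected size of 4 person-months with a standard deviation of 2/3.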
A new method for estimating carbon dioxide emissions from transportation at fine spatial scales
Energy Technology Data Exchange (ETDEWEB)
Shu Yuqin [School of Geographical Science, South China Normal University, Guangzhou 510631 (China); Lam, Nina S N; Reams, Margaret, E-mail: gis_syq@126.com, E-mail: nlam@lsu.edu, E-mail: mreams@lsu.edu [Department of Environmental Sciences, Louisiana State University, Baton Rouge, 70803 (United States)
2010-10-15
Detailed estimates of carbon dioxide (CO2) emissions at fine spatial scales are useful to both modelers and decision makers who are faced with the problem of global warming and climate change. Globally, transport-related emissions of carbon dioxide are growing. This letter presents a new method, based on the volume-preserving principle in the areal interpolation literature, to disaggregate transportation-related CO2 emission estimates from the county-level scale to a 1 km² grid scale. The proposed volume-preserving interpolation (VPI) method, together with the distance-decay principle, is used to derive emission weights for each grid cell based on its proximity to highways, roads, railroads, waterways, and airports. The total CO2 emission value summed from the grid cells within a county is made equal to the original county-level estimate, thus enforcing the volume-preserving property. The method was applied to downscale the transportation-related CO2 emission values by county (i.e. parish) for the state of Louisiana into 1 km² grid cells. The results reveal a more realistic spatial pattern of CO2 emissions from transportation, which can be used to identify emission 'hot spots'. Of the four highest transportation-related CO2 emission hotspots in Louisiana, high-emission grid cells covered virtually the entire East Baton Rouge Parish and Orleans Parish, whereas CO2 emissions in Jefferson Parish (New Orleans suburb) and Caddo Parish (city of Shreveport) were more unevenly distributed. We argue that the new method is sound in principle, flexible in practice, and that the resulting estimates are more accurate than those of previous gridding approaches.
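The volume-preserving core of the method reduces to a normalization: grid weights (from distance-decay proximity to transport infrastructure) are scaled so that the cells of each county sum back to the county total. A minimal sketch, with the weight construction left abstract:

```python
import numpy as np


def disaggregate(county_total, weights):
    """Volume-preserving disaggregation: distribute a county-level CO2
    estimate over grid cells in proportion to the given weights (e.g.
    distance-decay proximity to roads, rail, waterways, airports), so
    that the grid values sum exactly back to the county total."""
    w = np.asarray(weights, float)
    return county_total * w / w.sum()
```

Because the normalization is per county, aggregating the output over any county always reproduces the input estimate, which is the defining property the letter relies on.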
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
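The two-parameter fractal model makes the two-point idea concrete: with θ(ψ) = θs (ψa/ψ)^(3−D), two (ψ, θ) measurements determine the fractal dimension D and air entry value ψa in closed form. The sketch below captures the spirit of this inversion, not the exact algorithm of either cited method:

```python
import numpy as np


def fit_two_point(psi1, theta1, psi2, theta2, theta_s):
    """Solve the Tyler-Wheatcraft fractal retention model
        theta = theta_s * (psi_a / psi) ** (3 - D),  psi >= psi_a,
    for D and psi_a from two measured (matric potential, water content)
    points. theta_s is the saturated water content."""
    exponent = np.log(theta1 / theta2) / np.log(psi2 / psi1)   # equals 3 - D
    D = 3.0 - exponent
    psi_a = psi1 * (theta1 / theta_s) ** (1.0 / exponent)
    return D, psi_a


def theta(psi, D, psi_a, theta_s):
    """Water content predicted by the fitted fractal model (psi >= psi_a)."""
    return theta_s * (psi_a / psi) ** (3.0 - D)
```

Once D and ψa are fixed, `theta` predicts the full curve, which is what gets compared against the measured retention data in the study.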
METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE
Directory of Open Access Journals (Sweden)
Татьяна Александровна Коркина
2013-08-01
Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to the estimation of an employee's professionalism, both from the position of the external manifestations of the reliability and effectiveness of the work and from the position of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of an employee's professionalism makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
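The baseline frequency-averaging estimator referred to above can be sketched as follows (the path delays and gains of the synthetic transfer function are assumptions; the kernel-weighted refinement proposed in the paper is not reproduced here):

```python
import numpy as np

# Synthetic band-limited transfer function: a superposition of three paths
# with assumed delays and gains, sampled on a 100 kHz frequency grid.
f = np.arange(0.0, 20e6, 100e3)
taus = np.array([0.1e-6, 0.4e-6, 0.9e-6])   # path delays in seconds (assumed)
gains = np.array([1.0, 0.7, 0.4])           # path gains (assumed)
H = (gains * np.exp(-2j * np.pi * np.outer(f, taus))).sum(axis=1)

def fcf_frequency_averaging(H, max_lag):
    """Estimate the FCF by averaging H(f) H*(f + df) over the measured band."""
    r = np.array([np.mean(H[: H.size - lag] * np.conj(H[lag:]))
                  for lag in range(max_lag)])
    return r / r[0].real        # normalize so FCF(0) = 1

fcf = fcf_frequency_averaging(H, 50)
```

The averaged products contain the ATs (one per path) plus the CTs between different paths; the paper's kernel method down-weights the latter before averaging.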
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
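As a toy illustration of contingency strength estimation (the schedules and probabilities below are invented, and this simple conditional-probability difference is only one of several indices in the literature):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-s bins: response occurrence and reinforcer delivery under a
# response-dependent schedule (assumed probabilities for illustration).
n = 10_000
response = rng.random(n) < 0.3
# Reinforce 80% of response bins, 5% of non-response bins.
reinforcer = np.where(response, rng.random(n) < 0.8, rng.random(n) < 0.05)

# A simple interval-based contingency index:
# P(reinforcer | response) - P(reinforcer | no response).
p_given_r = reinforcer[response].mean()
p_given_not = reinforcer[~response].mean()
contingency = p_given_r - p_given_not
```

Setting the two schedule probabilities equal (a response-independent schedule) drives the index toward zero, which is the kind of sensitivity the comparison study evaluates.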
Plant-available soil water capacity: estimation methods and implications
Directory of Open Access Journals (Sweden)
Bruno Montoani Silva
2014-04-01
Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in land-use planning. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content found by the WP4-T psychrometer at the permanent wilting point was significantly lower than the other estimates. We conclude that the estimation of the available water capacity is markedly influenced by the estimation methods, which has to be taken into consideration because of the practical importance of this parameter.
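The inflection-point estimate of field capacity mentioned above can be located numerically on a van Genuchten curve. A sketch with assumed parameters (not the paper's fitted values):

```python
import numpy as np

# van Genuchten retention curve with assumed parameters for illustration.
theta_r, theta_s = 0.10, 0.55
alpha, n = 0.5, 1.8          # alpha in 1/kPa
m = 1.0 - 1.0 / n

def theta(h):
    """Water content at matric potential h (kPa, positive)."""
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

# Locate the inflection point of theta versus ln(h) numerically and read the
# water content there (one proposed field-capacity estimate).
ln_h = np.linspace(np.log(0.01), np.log(1e4), 100_001)
th = theta(np.exp(ln_h))
slope = np.gradient(th, ln_h)
i_inf = np.argmin(slope)       # steepest descent of the curve = inflection
fc_inflection = th[i_inf]
```

For the van Genuchten model the inflection of θ versus ln h also has a closed form, θ_i = θr + (θs − θr)(1 + 1/m)^(−m), which the numerical search reproduces.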
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
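The seasonal Hodges-Lehmann step-trend estimator has a compact form: pairwise differences between the two periods are restricted to matching seasons, and the median of all such differences is taken. A sketch on invented monthly data with a known step of −2.0:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Monthly concentrations: 4 years before and 4 years after an intervention,
# with seasonality, noise, and a true step change of -2.0 (illustrative).
months = np.tile(np.arange(12), 4)
seasonal = 3.0 * np.sin(2 * np.pi * months / 12)
before = 10.0 + seasonal + rng.normal(0.0, 0.5, months.size)
after = 8.0 + seasonal + rng.normal(0.0, 0.5, months.size)

# Seasonal Hodges-Lehmann estimate: differences are taken only between
# observations from the same season (month), then the median is used.
diffs = [a - b
         for s in range(12)
         for b, a in itertools.product(before[months == s], after[months == s])]
step_estimate = np.median(diffs)
```

Restricting the pairs to matching seasons removes the seasonal signal from every difference, which is why the median lands near the true step despite the large seasonal amplitude.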
Nonparametric methods for volatility density estimation
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2009-01-01
Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on
Fusion rule estimation using vector space methods
International Nuclear Information System (INIS)
Rao, N.S.V.
1997-01-01
In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) such that Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks
A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method
Jose Miguel Albala-Bertrand
2001-01-01
There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.
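For context, a common benchmark-year device (a steady-state formula feeding a perpetual-inventory recursion) is sketched below with invented numbers; this illustrates the kind of estimate the OCM is meant to cross-check, and is not Albala-Bertrand's method itself:

```python
# Steady-state benchmark: if investment grows at rate g and capital
# depreciates at rate delta, K0 = I0 / (g + delta). All values assumed.
I0 = 120.0      # benchmark-year gross investment
g = 0.04        # long-run real growth rate of investment
delta = 0.06    # depreciation rate
K0 = I0 / (g + delta)

# Perpetual inventory from the benchmark: K_t = (1 - delta) * K_{t-1} + I_t.
investment = [125.0, 130.0, 128.0]
K = K0
for I in investment:
    K = (1.0 - delta) * K + I
```

An independent consistency check of the kind the abstract calls for would compare K0 against estimates built from other assumptions, since the recursion simply propagates whatever benchmark it is given.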
Jonard, François
2015-06-01
In this paper, we experimentally analyzed the feasibility of estimating soil hydraulic properties from 1.4 GHz radiometer and 0.8-2.6 GHz ground-penetrating radar (GPR) data. Radiometer and GPR measurements were performed above a sand box, which was subjected to a series of vertical water content profiles in hydrostatic equilibrium with a water table located at different depths. A coherent radiative transfer model was used to simulate brightness temperatures measured with the radiometer. GPR data were modeled using full-wave layered medium Green's functions and an intrinsic antenna representation. These forward models were inverted to optimally match the corresponding passive and active microwave data. This allowed us to reconstruct the water content profiles, and thereby estimate the sand water retention curve described using the van Genuchten model. Uncertainty of the estimated hydraulic parameters was quantified using the Bayesian-based DREAM algorithm. For both radiometer and GPR methods, the results were in close agreement with in situ time-domain reflectometry (TDR) estimates. Compared with radiometer and TDR, much smaller confidence intervals were obtained for GPR, which was attributed to its relatively large bandwidth of operation, including frequencies smaller than 1.4 GHz. These results offer valuable insights into future potential and emerging challenges in the development of joint analyses of passive and active remote sensing data to retrieve effective soil hydraulic properties.
Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.
2010-05-01
Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of the scattering theory in the Rayleigh limit, these tiny particles are hard to measure by satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters which are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles are taken into account which are practically invisible to optical remote sensing instruments. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to fairly accurately retrieve the particle sizes and associated integrated properties (surface area and volume densities), even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors. In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally
Directory of Open Access Journals (Sweden)
Xueqiang Chen
2015-01-01
Full Text Available We consider the computationally efficient direction-of-arrival (DOA) and noncircular (NC) phase estimation problem of noncircular signals for a uniform linear array. The key idea is to apply the noncircular propagator method (NC-PM), which does not require eigenvalue decomposition (EVD) of the covariance matrix or singular value decomposition (SVD) of the received data. The noncircular rotational invariance propagator method (NC-RI-PM) avoids spectral peak searching in PM and can obtain the closed-form solution of DOA, so it has lower computational complexity. An improved NC-RI-PM algorithm for noncircular signals and a uniform linear array is proposed to estimate the elevation angles and noncircular phases with automatic pairing. We reconstruct the extended array output by combining the array output and its conjugated counterpart. Our algorithm fully uses the extended array elements in the improved propagator matrix to estimate the elevation angles and noncircular phases by utilizing the rotational invariance property between subarrays. Compared with NC-RI-PM, the proposed algorithm has better angle estimation performance and much lower computational load. The computational complexity of the proposed algorithm is analyzed. We also derive the variance of the estimation error and the Cramer-Rao bound (CRB) of noncircular signals for a uniform linear array. Finally, simulation results are presented to demonstrate the effectiveness of our algorithm.
Directory of Open Access Journals (Sweden)
P. Ribereau
2008-12-01
Full Text Available Since the pioneering work of Landwehr et al. (1979), Hosking et al. (1985) and their collaborators, the Probability Weighted Moments (PWM) method has been a very popular, simple and efficient way to estimate the parameters of the Generalized Extreme Value (GEV) distribution when modeling the distribution of maxima (e.g., annual maxima of precipitation) in the Identically and Independently Distributed (IID) context. When the IID assumption is not satisfied, a flexible alternative, the Maximum Likelihood Estimation (MLE) approach, offers an elegant way to handle non-stationarities by letting the GEV parameters be time dependent. Despite its qualities, the MLE applied to the GEV distribution does not always provide accurate return level estimates, especially for small sample sizes or heavy tails. These drawbacks are particularly true in some non-stationary situations. To reduce these negative effects, we propose to extend the PWM method to a more general framework that enables us to model temporal covariates and provide accurate GEV-based return levels. Theoretical properties of our estimators are discussed. Small and moderate sample size simulations in a non-stationary context are analyzed, and two brief applications to annual maxima of CO₂ and seasonal maxima of cumulated daily precipitation are presented.
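The classical (stationary) PWM estimators for the GEV that the paper generalizes can be sketched as follows, using Hosking et al.'s (1985) approximations (the simulated sample and its parameters are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Simulate an IID GEV sample via its quantile function (Hosking's
# parameterization: location xi, scale alpha, shape k with k > 0 bounded above).
xi_true, alpha_true, k_true = 0.0, 1.0, 0.2
u = rng.random(5000)
x = xi_true + alpha_true * (1.0 - (-np.log(u)) ** k_true) / k_true

# Sample probability-weighted moments b0, b1, b2 (Landwehr et al., 1979).
xs = np.sort(x)
n = xs.size
i = np.arange(1, n + 1)
b0 = xs.mean()
b1 = np.mean((i - 1) / (n - 1) * xs)
b2 = np.mean((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * xs)

# Hosking et al. (1985) PWM estimators of the GEV parameters.
c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
k_hat = 7.8590 * c + 2.9554 * c ** 2
alpha_hat = (2 * b1 - b0) * k_hat / (math.gamma(1 + k_hat) * (1 - 2 ** (-k_hat)))
xi_hat = b0 + alpha_hat * (math.gamma(1 + k_hat) - 1) / k_hat
```

The paper's extension replaces these constant parameters with functions of temporal covariates; the stationary recipe above is the special case it reduces to.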
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) sampling and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
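The difference between MC and LHS is easy to see in one dimension: LHS places exactly one point in each of n equal-probability strata, which sharply reduces the variance of a mean estimate. A sketch (the toy response function is an assumption, not a NESSUS model):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Plain Monte Carlo: independent uniform samples.
mc = rng.random(n)

# Latin hypercube sampling in 1-D: one point per equal-probability stratum,
# randomly placed within the stratum, then shuffled.
strata = (np.arange(n) + rng.random(n)) / n
lhs = rng.permutation(strata)

def g(u):
    """Toy response model; true mean of g(U) for U ~ Uniform(0,1) is e - 1."""
    return np.exp(u)

mc_mean = g(mc).mean()
lhs_mean = g(lhs).mean()
```

For a multi-dimensional problem each input dimension gets its own shuffled stratification, which is where the `permutation` step matters.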
International Nuclear Information System (INIS)
Khakural, B.R.; Robert, P.C.; Hugins, D.R.
1998-01-01
There is a growing interest in real-time estimation of soil moisture for site-specific crop management. Non-contacting electromagnetic inductive (EMI) methods have the potential to provide real-time estimates of soil profile water contents. Soil profile water contents were monitored with a neutron probe at selected sites. A Geonics Ltd EM-38 terrain meter was used to record bulk soil electrical conductivity (ECa) readings across a soil-landscape in west-central Minnesota with variable moisture regimes. The relationships among ECa and selected soil and landscape properties were examined. Bulk soil electrical conductivity (0-1.0 and 0-0.5 m) was negatively correlated with relative elevation. It was positively correlated with soil profile (1.0 m) clay content and negatively correlated with soil profile coarse fragments (2 mm) and sand content. There was a significant linear relationship between ECa (0-1.0 and 0-0.5 m) and soil profile water storage. Soil water storage estimated from ECa reflected changes in landscape and soil characteristics.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
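A minimal constant-velocity Kalman filter of the kind that could precede a PCT step is sketched below on a synthetic noisy angle signal (all tuning values are assumptions; the actual study filters marker coordinates, not a scalar angle):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic pendulum angle with additive white measurement noise standing in
# for the soft-tissue artifact (illustrative only).
dt = 0.01
t = np.arange(0.0, 5.0, dt)
true_angle = 0.4 * np.cos(2 * np.pi * 0.5 * t)
z = true_angle + rng.normal(0.0, 0.05, t.size)

# Constant-velocity Kalman filter: state = [angle, angular rate].
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
q = 2.0                                              # assumed noise intensity
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[0.05**2]])
x = np.array([z[0], 0.0])
P = np.eye(2)

est = np.empty_like(z)
for k, zk in enumerate(z):
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (np.array([zk]) - H @ x)             # update
    P = (np.eye(2) - K @ H) @ P
    est[k] = x[0]
```

Unlike a fixed low-pass filter, the gain adapts to the modeled process and measurement noise, which is the property the study exploits.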
Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar
Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
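The one-parameter ATI inversion described above can be sketched for a lognormal rain-rate distribution with known log-standard-deviation: the fractional area above a threshold alone then fixes the mean (all numerical values are invented):

```python
import math
import statistics

import numpy as np

rng = np.random.default_rng(6)

# Rain rates over an area, conditional on raining: lognormal with a known
# (assumed) log-standard-deviation, as the one-parameter family idea suggests.
sigma = 1.0
mu = 0.5
rain = rng.lognormal(mu, sigma, 200_000)
true_mean = rain.mean()

# ATI-style inversion: observe only the fractional area above a threshold,
# then recover mu (and hence the mean) from the assumed lognormal shape.
tau = 5.0
frac_above = np.mean(rain > tau)
z = statistics.NormalDist().inv_cdf(1.0 - frac_above)
mu_hat = math.log(tau) - sigma * z
mean_hat = math.exp(mu_hat + sigma**2 / 2.0)
```

This recovers the mean without using any individual high or low rain-rate estimate, which is exactly the appeal of ATI methods when those estimates are biased.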
Estimation of soil profile properties using field and laboratory VNIR spectroscopy
Diffuse reflectance spectroscopy (DRS) soil sensors have the potential to provide rapid, high-resolution estimation of multiple soil properties. Although many studies have focused on laboratory-based visible and near-infrared (VNIR) spectroscopy of dried soil samples, previous work has demonstrated ...
Omaraa, Ehsan; Saman, Wasim; Bruno, Frank; Liu, Ming
2017-06-01
Latent heat storage using phase change materials (PCMs) can be used to store large amounts of energy in a narrow temperature difference during phase transition. The thermophysical properties of PCMs, such as latent heat, specific heat and melting and solidification temperature, need to be defined at high precision for the design and cost estimation of latent heat storage systems. The existing laboratory standard methods, such as differential thermal analysis (DTA) and differential scanning calorimetry (DSC), use a small sample size (1-10 mg) to measure thermophysical properties, which makes these methods suitable for homogeneous materials. In addition, this small amount of sample has different thermophysical properties when compared with a bulk sample and may have limitations for evaluating the properties of mixtures. To avoid the drawbacks of existing methods, the temperature-history (T-history) method can be used with bulk quantities of PCM salt mixtures to characterize PCMs. This paper presents a modified T-history setup, which was designed and built at the University of South Australia to measure the melting point, heat of fusion, specific heat, degree of supercooling and phase separation of salt mixtures in a temperature range between 200 °C and 400 °C. Sodium nitrate (NaNO3) was used to verify the accuracy of the new setup.
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
A new rapid method for rockfall energies and distances estimation
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope, and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies
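For orientation only, a far simpler ballistic estimate of first-impact energy and distance at the base of a vertical cliff is sketched below (assumed block and cliff values; the paper's empirical laws come from 78400 probabilistic trajectory simulations, not from this idealization):

```python
import math

# A block of mass m detaches with horizontal velocity v0 from the top of a
# vertical cliff of height H; air drag and rotation are neglected.
m = 800.0      # kg, assumed representative block
H = 30.0       # m, cliff height
v0 = 2.0       # m/s, horizontal launch velocity
g = 9.81       # m/s^2

t_fall = math.sqrt(2.0 * H / g)              # time to first impact
x_impact = v0 * t_fall                       # horizontal distance at the base
E_kinetic = 0.5 * m * v0**2 + m * g * H      # kinetic energy at impact
```

Real trajectories scatter widely around such a deterministic bound, which is why the paper samples cliff geometry, surface roughness and material properties probabilistically.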
Asymptotic Estimates and Qualitative Properties of an Elliptic Problem in Dimension Two
Mehdi, Khalil El; Grossi, Massimo
2003-01-01
In this paper we study a semilinear elliptic problem on a bounded domain in $\R^2$ with large exponent in the nonlinear term. We consider positive solutions obtained by minimizing suitable functionals. We prove some asymptotic estimates which enable us to associate a "limit problem" to the initial one. Using these estimates we prove some qualitative properties of the solution, namely characterization of level sets and nondegeneracy.
An Estimation Method for the Number of Carrier Frequencies
Directory of Open Access Journals (Sweden)
Xiong Peng
2015-01-01
This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, radar pulse signals are complex and changeable; among them, the single pulse with multiple carrier frequencies is the most typical, such as the frequency shift keying (FSK) signal, the hybrid FSK and linear frequency modulation (FSK-LFM) signal, and the hybrid FSK and bi-phase shift keying (FSK-BPSK) signal. For this kind of single pulse with multiple carrier frequencies, the paper models the complex signal as an AR process and then computes its power spectrum with the Burg algorithm. Experimental results show that the estimation method can determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
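The Burg-based carrier counting described in the abstract can be sketched as follows. The AR order, FFT size, and relative peak threshold are illustrative choices of ours, not the paper's; the test signal is synthetic.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients from reflection coefficients that
    minimize the summed forward and backward prediction-error power."""
    f = np.asarray(x, dtype=complex).copy()   # forward prediction error
    b = f.copy()                              # backward prediction error
    a = np.array([1.0 + 0j])                  # AR polynomial, a[0] = 1
    e = np.mean(np.abs(f) ** 2)               # prediction error power
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2 * np.sum(ff * np.conj(bb)) / np.sum(np.abs(ff)**2 + np.abs(bb)**2)
        a = np.append(a, 0) + k * np.conj(np.append(a, 0)[::-1])
        f, b = ff + k * bb, bb + np.conj(k) * ff
        e *= 1 - np.abs(k) ** 2
    return a, e

def carrier_count(x, order=8, nfft=4096, rel_threshold=0.1):
    """Count peaks of the AR power spectrum above a relative threshold,
    returning the count and the normalized peak frequencies in [0, 0.5]."""
    a, e = burg_ar(x, order)
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2
    peaks = [i for i in range(1, len(psd) - 1)
             if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]
             and psd[i] > rel_threshold * psd.max()]
    return len(peaks), [i / nfft for i in peaks]
```

Applied to a two-carrier pulse, the peak count recovers the number of carriers even though no explicit frequency grid search is performed.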
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and a fixed seasonal LAI method. Simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Settling characteristics of nursery pig manure and nutrient estimation by the hydrometer method.
Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian
2003-05-01
The hydrometer method to measure manure specific gravity and subsequently relate it to manure nutrient contents was examined in this study. It was found that this method might be improved in estimation accuracy if only manure from a single growth stage of pigs was used (e.g., nursery pig manure used here). The total solids (TS) content of the test manure was well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.9944 and 0.9873, respectively. Also observed were good linear correlations between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.9836 and 0.9843, respectively). These correlations were much better than those reported by past researchers, in which lumped data for pigs at different growing stages were used. It may therefore be inferred that developing different linear equations for pigs at different ages should improve the accuracy in manure nutrient estimation using a hydrometer. Also, the error of using the hydrometer method to estimate manure TN and TP was found to increase, from +/- 10% to +/- 50%, with the decrease in TN (from 700 ppm to 100 ppm) and TP (from 130 ppm to 30 ppm) concentrations in the manure. The estimation errors for TN and TP may be larger than 50% if the total solids content is below 0.5%. In addition, the rapid settling of solids has long been considered characteristic of swine manure; however, in this study, the solids settling property appeared to be quite poor for nursery pig manure in that no conspicuous settling occurred after the manure was left statically for 5 hours. This information has not been reported elsewhere in the literature and may need further research to verify.
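The linear specific-gravity-to-nutrient relation underlying the hydrometer method can be sketched as a simple calibration fit. The numbers below are invented for illustration only; they are not the paper's measurements, although they fall in the concentration ranges the abstract mentions.

```python
import numpy as np

# Hypothetical calibration data for nursery-pig manure (illustrative values):
# hydrometer specific-gravity readings vs. measured total nitrogen (ppm).
sg = np.array([1.002, 1.005, 1.010, 1.015, 1.020, 1.028])
tn = np.array([110.0, 190.0, 330.0, 470.0, 610.0, 820.0])

slope, intercept = np.polyfit(sg, tn, 1)   # least-squares line: TN = a*SG + b
r = np.corrcoef(sg, tn)[0, 1]              # correlation coefficient of the fit

def tn_from_hydrometer(specific_gravity):
    """Estimate total nitrogen (ppm) from a hydrometer reading via the fitted line."""
    return slope * specific_gravity + intercept
```

Fitting a separate line per pig growth stage, as the authors suggest, would mean repeating this calibration for each stage rather than pooling the data.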
Consumptive use of upland rice as estimated by different methods
International Nuclear Information System (INIS)
Chhabda, P.R.; Varade, S.B.
1985-01-01
The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif), as estimated by the modified Penman, radiation, pan-evaporation and Hargreaves methods, varied from the consumptive use computed by the gravimetric method. The variability increased with an increase in the irrigation interval and decreased with an increase in the level of N applied. The average variability was smallest for the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered
Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties
Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc
2015-12-01
In the present paper, an artificial intelligence-based approach dealing with the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence in case of correlated properties to be estimated from a rebuilt impulse response. So, the main objective of the present work was to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials, from front-face or rear-face photothermal responses to pseudo random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. That is why the developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be simultaneously estimated with very good accuracy (the generalization error ranges between 4.6% and 6.2%).
Zhang, Airong; Zhang, Song; Bian, Cuirong
2018-02-01
Cortical bone provides the main form of support in humans and other vertebrates against various forces. Thus, capturing its mechanical properties is important. In this study, the mechanical properties of cortical bone were investigated by using automated ball indentation and graphics processing at both the macroscopic and microstructural levels under dry conditions. First, all polished samples were photographed under a metallographic microscope, and the area ratio of the circumferential lamellae and osteons was calculated through the graphics processing method. Second, fully-computer-controlled automated ball indentation (ABI) tests were performed to explore the micro-mechanical properties of the cortical bone at room temperature and a constant indenter speed. The indentation defects were examined with a scanning electron microscope. Finally, the macroscopic mechanical properties of the cortical bone were estimated with the graphics processing method and mixture rule. Combining ABI and graphics processing proved to be an effective tool for obtaining the mechanical properties of the cortical bone, and the indenter size had a significant effect on the measurement. The methods presented in this paper provide an innovative approach to acquiring the macroscopic mechanical properties of cortical bone in a nondestructive manner. Copyright © 2017 Elsevier Ltd. All rights reserved.
Macro-architectured cellular materials: Properties, characteristic modes, and prediction methods
Ma, Zheng-Dong
2017-12-01
Macro-architectured cellular (MAC) material is defined as a class of engineered materials having configurable cells of relatively large (i.e., visible) size that can be architecturally designed to achieve various desired material properties. Two types of novel MAC materials, negative Poisson's ratio material and biomimetic tendon reinforced material, were introduced in this study. To estimate the effective material properties for structural analyses and to optimally design such materials, a set of suitable homogenization methods was developed that provided an effective means for the multiscale modeling of MAC materials. First, a strain-based homogenization method was developed using an approach that separated the strain field into a homogenized strain field and a strain variation field in the local cellular domain superposed on the homogenized strain field. The principle of virtual displacements for the relationship between the strain variation field and the homogenized strain field was then used to condense the strain variation field onto the homogenized strain field. The new method was then extended to a stress-based homogenization process based on the principle of virtual forces and further applied to address the discrete systems represented by the beam or frame structures of the aforementioned MAC materials. The characteristic modes and the stress recovery process used to predict the stress distribution inside the cellular domain and thus determine the material strengths and failures at the local level are also discussed.
Methods for estimating low-flow statistics for Massachusetts streams
Ries, Kernell G.; Friesz, Paul J.
2000-01-01
Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
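The drainage-area ratio method described above amounts to a one-line scaling of the index-station statistic; the sketch below also enforces the report's recommended 0.3-1.5 area-ratio range. The function name and error handling are illustrative, not from the report's software.

```python
def drainage_area_ratio_estimate(q_index, a_index, a_ungaged):
    """Scale a low-flow statistic from an index gaging station to an ungaged
    site by the ratio of drainage areas. The report recommends this method
    when the ratio falls between about 0.3 and 1.5."""
    ratio = a_ungaged / a_index
    if not 0.3 <= ratio <= 1.5:
        raise ValueError(
            f"area ratio {ratio:.2f} outside recommended 0.3-1.5 range; "
            "use the regression equations instead")
    return q_index * ratio
```

Outside the recommended ratio range, the report's regression equations (fitted to 87 to 133 stations) are the appropriate fallback.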
Comparing Methods for Estimating Direct Costs of Adverse Drug Events.
Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas
2017-12-01
To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Discretization of Lévy semistationary processes with application to estimation
DEFF Research Database (Denmark)
Bennedsen, Mikkel; Lunde, Asger; Pakkanen, Mikko
Motivated by the construction of the Ito stochastic integral, we consider a step function method to discretize and simulate volatility modulated Lévy semistationary processes. Moreover, we assess the accuracy of the method with a particular focus on integrating kernels with a singularity… at the origin. Using the simulation method, we study the finite sample properties of some recently developed estimators of realized volatility and associated parametric estimators for Brownian semistationary processes. Although the theoretical properties of these estimators have been established under high…
Phase difference estimation method based on data extension and Hilbert transform
International Nuclear Information System (INIS)
Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao
2015-01-01
To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
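A minimal numpy-only sketch of Hilbert-transform phase difference estimation follows. Trimming the ends of the analytic signals is a crude stand-in for the paper's data-extension step against end effects, and the single cross-correlation angle replaces the paper's weighted phase average; both simplifications are ours.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (numpy-only equivalent of a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_difference(x1, x2, trim=0.1):
    """Phase difference (rad) between two equal-frequency signals sampled
    over a non-integer number of periods. Ends are trimmed to suppress the
    Hilbert transform's end effects."""
    z1, z2 = analytic_signal(x1), analytic_signal(x2)
    k = max(1, int(len(x1) * trim))
    # angle of the cross-correlation of the trimmed analytic signals
    return np.angle(np.sum(z1[k:-k] * np.conj(z2[k:-k])))
```

For a Coriolis flowmeter-style use case, the returned angle is the quantity that maps to mass flow rate.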
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the RVS of a single flight test of a certain aircraft. Finally, to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM). The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.
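The bootstrap half of the method can be sketched as a plain percentile bootstrap for the estimated value and interval. The "gray" part of GBM (GM(1,1) forecasting applied to the resampled series) is not reproduced here, so this is only the conventional BM baseline the abstract compares against.

```python
import numpy as np

def bootstrap_interval(sample, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap estimate of a statistic and its (1-alpha) interval
    from a small sample: resample with replacement, recompute the statistic,
    and take empirical quantiles of the replicates."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    reps = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return stat(sample), (lo, hi)
```

Applying this per frequency bin of a measured spectrum would give the frequency-varying estimated interval the abstract refers to.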
Simple method for the estimation of glomerular filtration rate
Energy Technology Data Exchange (ETDEWEB)
Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)
1977-02-01
A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas, with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
Estimating lithospheric properties at Atla Regio, Venus
Phillips, Roger J.
1994-01-01
Magellan spherical harmonic gravity and topography models are used to estimate lithospheric properties at Atla Regio, Venus, a proposed hotspot with dynamic support from mantle plume(s). Global spherical harmonic and local representations of the gravity field share common properties in the Atla region in terms of their spectral behavior over a wavelength band from approximately 2100 to approximately 700 km. The estimated free-air admittance spectrum displays a rather featureless long-wavelength portion followed by a sharp rise at wavelengths shorter than about 1000 km. This sharp rise requires significant flexural support of short-wavelength structures. The Bouguer coherence also displays a sharp drop in this wavelength band, indicating a finite flexural rigidity of the lithosphere. A simple model for lithospheric loading from above and below is introduced (D. W. Forsyth, 1985) with four parameters: f, the ratio of bottom loading to top loading; z(sub m), crustal thickness; z(sub l), depth to bottom loading source; and T(sub e), elastic lithosphere thickness. A dual-mode compensation model is introduced in which the shorter wavelengths (lambda approximately less than 1000 km) might be explained best by a predominance of top loading by the large shield volcanoes Maat Mons, Ozza Mons, and Sapas Mons, and the longer wavelengths (lambda approximately greater than 1500 km) might be explained best by a deep depth of compensation, possibly representing bottom loading by a dynamic source. A Monte Carlo inversion technique is introduced to thoroughly search the four-space of the model parameters and to examine parameter correlation in the solutions. Venus either is considerably deficient in heat sources relative to Earth, or its thermal lithosphere is overthickened in response to an earlier episode of significant heat loss from the planet.
Effect of CT image size and resolution on the accuracy of rock property estimates
Bazaikin, Y.; Gurevich, B.; Iglauer, S.; Khachkova, T.; Kolyukhin, D.; Lebedev, M.; Lisitsa, V.; Reshetova, G.
2017-05-01
In order to study the effect of micro-CT scan resolution and image size on the accuracy of upscaled digital rock property estimates, core-sample images of Bentheimer sandstone with resolutions varying from 0.9 μm to 24 μm are used. We statistically show that the correlation length of the pore-to-matrix distribution can be reliably determined for images with a resolution finer than 9 voxels per correlation length, and that the representative volume for this property is about 15³ correlation lengths. Similar resolution values for the statistically representative volume are also valid for the estimation of the total porosity, specific surface area, mean curvature, and topology of the pore space. Only the total porosity and the number of isolated pores are stably recovered, whereas the geometric and topological measures of the pore space are strongly affected by changes in resolution. We also simulate fluid flow in the pore space and estimate the permeability and tortuosity of the sample. The results demonstrate that the representative volume for transport property calculations should be greater than 50 correlation lengths of the pore-to-matrix distribution. On the other hand, permeability estimation based on the statistical analysis of equivalent realizations shows only a weak influence of the resolution on the transport properties. The reason for this might be that the characteristic scale of the particular physical processes affects the result more strongly than the model (image) scale.
Nanosilicon properties, synthesis, applications, methods of analysis and control
Ischenko, Anatoly A; Aslanov, Leonid A
2015-01-01
Nanosilicon: Properties, Synthesis, Applications, Methods of Analysis and Control examines the latest developments on the physics and chemistry of nanosilicon. The book focuses on methods for producing nanosilicon, its electronic and optical properties, research methods to characterize its spectral and structural properties, and its possible applications. The first part of the book covers the basic properties of semiconductors, including causes of the size dependence of the properties, structural and electronic properties, and physical characteristics of the various forms of silicon. It presents theoretical and experimental research results as well as examples of porous silicon and quantum dots. The second part discusses the synthesis of nanosilicon, modification of the surface of nanoparticles, and properties of the resulting particles. The authors give special attention to the photoluminescence of silicon nanoparticles. The third part describes methods used for studying and controlling the structure and pro...
A new method to determine magnetic properties of the unsaturated-magnetized rotor of a novel gyro
Energy Technology Data Exchange (ETDEWEB)
Li, Hai, E-mail: lihai7772006@126.com [MEMS Center, Harbin Institution of Technology, Harbin, 150001 (China); Liu, Xiaowei [MEMS Center, Harbin Institution of Technology, Harbin, 150001 (China); Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Ministry of Education, Harbin, 150001 (China); Dong, Changchun [School of Software, Harbin University of Science and Technology, Harbin, 150001 (China); Zhang, Haifeng [MEMS Center, Harbin Institution of Technology, Harbin, 150001 (China)
2016-06-01
A new method is proposed to determine magnetic properties of the unsaturated-magnetized, small and irregular shaped rotor of a novel gyro. The method is based on finite-element analysis and the measurements of the magnetic flux density distribution, determining magnetic parameters by comparing the magnetic flux intensity distribution differences between the modeling results under different parameters and the measured ones. Experiment on a N30 Grade NdFeB magnet shows that its residual magnetic flux density is 1.10±0.01 T, and coercive field strength is 801±3 kA/m, which are consistent with the given parameters of the material. The method was applied to determine the magnetic properties of the rotor of the gyro, and the magnetic properties acquired were used to predict the open-loop gyro precession frequency. The predicted precession frequency should be larger than 12.9 Hz, which is close to the experimental result 13.5 Hz. The result proves that the method is accurate in estimating the magnetic properties of the rotor of the gyro. - Highlights: • A new method to determine the magnetic properties of a gyro’s rotor is proposed. • The method is based on FEA and magnetic flux density distributions near magnets. • The result is determined by the distribution and values of all the measured points. • Using the result, the open-loop gyro precession frequency is precisely predicted.
Comparison of methods for estimating premorbid intelligence
Bright, Peter; van der Linde, Ian
2018-01-01
To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...
A numerical integration-based yield estimation method for integrated circuits
International Nuclear Information System (INIS)
Liang Tao; Jia Xinzhang
2011-01-01
A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
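The variance-reduction idea behind Latin hypercube sampling for yield estimation can be sketched as follows. This is plain LHS Monte Carlo over the disturbance space, not the paper's OA-MLHS with Box-Cox transformation, and the pass/fail specification is a toy quadratic example of our own.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n stratified samples in [0,1]^d: one sample per stratum, with an
    independent random permutation of the strata in each dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def estimate_yield(spec_ok, n=2000, d=2, seed=1):
    """Fraction of disturbance-space samples meeting the spec, estimated
    with Latin hypercube sampling over a [-3, 3]^d box."""
    rng = np.random.default_rng(seed)
    x = 6.0 * latin_hypercube(n, d, rng) - 3.0   # map [0,1]^d -> [-3,3]^d
    return np.mean([spec_ok(row) for row in x])
```

For a circular acceptability region x² + y² ≤ 4 inside the 6×6 box, the true yield is 4π/36 ≈ 0.349, which the LHS estimate recovers closely at modest sample sizes.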
Correction of Misclassifications Using a Proximity-Based Estimation Method
Directory of Open Access Journals (Sweden)
Shmulevich Ilya
2004-01-01
An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal; that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for the correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
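The sliding-window, proximity-matrix relabeling can be sketched for the one-dimensional case as follows. The window size and the summed-distance decision rule are illustrative simplifications of the nonlinear operations described above, and the example proximity matrix is invented.

```python
import numpy as np

def correct_labels(labels, proximity, half_window=2):
    """Sliding-window relabeling: each position is assigned the class that is
    closest, by summed proximity-matrix distance, to the classes observed in
    its neighborhood. proximity[i, j] is the distance between classes i and j."""
    labels = np.asarray(labels)
    out = labels.copy()
    for i in range(len(labels)):
        window = labels[max(0, i - half_window): i + half_window + 1]
        costs = proximity[:, window].sum(axis=1)   # cost of each candidate class
        out[i] = np.argmin(costs)
    return out
```

An isolated misclassification surrounded by a consistent class is pulled back to that class, while runs of a legitimately different class survive because their in-window cost is lowest.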
Stock price estimation using ensemble Kalman Filter square root method
Karya, D. F.; Katias, P.; Herlambang, T.
2018-04-01
Shares are securities serving as evidence of an individual's or corporation's equity in an enterprise, especially in public companies whose activity is stock trading. Investment in stock trading is likely to be the option of investors, as stock trading offers attractive profits. In choosing a safe investment in stocks, investors require a way of assessing the prices of the stocks they intend to buy so as to help optimize their profits. An effective method of analysis which will reduce the risk the investors may bear is to predict or estimate the stock price. Estimation is carried out because a problem can often be solved by using previous information or data related or relevant to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be utilized in investors' decision making on investment. In this paper, stock price estimation was made by using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimation obtained by applying the EnKF method was more accurate than that by the EnKF-SR, with an estimation error of about 0.2% by EnKF and 2.6% by EnKF-SR.
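A minimal scalar Ensemble Kalman Filter with perturbed observations illustrates the mechanics behind such price tracking. The random-walk state model, noise levels, and ensemble size are illustrative assumptions; the square-root variant (EnKF-SR) differs in how the analysis update avoids perturbing the observations.

```python
import random

def enkf_step(ensemble, obs, obs_var, proc_std, rng):
    """One Ensemble Kalman Filter cycle for a scalar random-walk state.

    Forecast: each member receives process noise.  Analysis: each member
    is nudged toward a perturbed observation with the ensemble Kalman gain.
    """
    fc = [m + rng.gauss(0.0, proc_std) for m in ensemble]          # forecast
    mean = sum(fc) / len(fc)
    var = sum((m - mean) ** 2 for m in fc) / (len(fc) - 1)         # sample variance
    gain = var / (var + obs_var)                                    # Kalman gain
    return [m + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - m) for m in fc]

rng = random.Random(1)
truth = 10.0                                   # hypothetical "true" price level
ensemble = [rng.gauss(0.0, 2.0) for _ in range(50)]
for _ in range(30):
    truth += rng.gauss(0.0, 0.1)               # true state drifts
    obs = truth + rng.gauss(0.0, 0.5)          # noisy price observation
    ensemble = enkf_step(ensemble, obs, obs_var=0.25, proc_std=0.1, rng=rng)
estimate = sum(ensemble) / len(ensemble)       # filtered price estimate
```

After a few assimilation cycles the ensemble mean tracks the drifting state to well within the observation noise, which is the behavior the paper quantifies as a sub-percent estimation error.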
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
Energy Technology Data Exchange (ETDEWEB)
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
International Nuclear Information System (INIS)
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.
2009-01-01
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE
Itai, Akitoshi; Yasukawa, Hiroshi
This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown that the tensor product expansion with absolute error (TPE-AE) method is effective for estimating background noise; however, the background noise might not be estimated properly by the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.
Quasi-Experiments and Hedonic Property Value Methods
Christopher F. Parmeter; Jaren C. Pope
2012-01-01
There has recently been a dramatic increase in the number of papers that have combined quasi-experimental methods with hedonic property models. This is largely due to the concern that cross-sectional hedonic methods may be severely biased by omitted variables. While the empirical literature has developed extensively, there has not been a consistent treatment of the theory and methods of combining hedonic property models with quasi-experiments. The purpose of this chapter is to fill this void.
Methods for risk estimation in nuclear energy
Energy Technology Data Exchange (ETDEWEB)
Gauvenet, A [CEA, 75 - Paris (France)
1979-01-01
The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents, and long-term risks. These methods have reached a high level of refinement, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.
Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis
2017-08-01
The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
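The estimator described, the slope of a log-log fit of the time-averaged mean-square displacement against the lag time, can be sketched as follows. The Brownian-motion test trajectory and the lag choices are illustrative assumptions, not the FBM analysis of the paper.

```python
import math
import random

def tamsd(x, lag):
    """Time-averaged mean-square displacement of trajectory x at a given lag."""
    n = len(x)
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n - lag)) / (n - lag)

def scaling_exponent(x, lags):
    """Least-squares slope of log TA-MSD versus log lag."""
    pts = [(math.log(l), math.log(tamsd(x, l))) for l in lags]
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, _ in pts)
    return num / den

random.seed(2)
# Ordinary Brownian motion (H = 1/2): the estimated exponent should be near 1.
traj = [0.0]
for _ in range(20000):
    traj.append(traj[-1] + random.gauss(0.0, 1.0))
alpha = scaling_exponent(traj, lags=[1, 2, 4, 8, 16])
```

For a long Brownian trajectory the fitted slope concentrates around the true exponent, illustrating the asymptotic unbiasedness and vanishing variance the paper proves for FBM.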
Correlation between the estimated molecular weight and the immunological properties of 125I-TSH
International Nuclear Information System (INIS)
Quiroga, S.E.; Ciscato, V.A.; Barmasch, M.; Kurcbart, H.; Veira de Giacomini, S.; Altschuler, N.; Caro, R.A.
1976-09-01
Thyrotropic Stimulating Hormone (TSH) was radioiodinated by the Chloramine T method in order to be used in radioimmunoassay procedures. It was purified by gel filtration, and each fraction of the eluate was analyzed in order to determine which one had the most suitable behaviour for that use. The molecular weight of each fraction was estimated, as well as its immunological reactivity and its non-specific binding. The 125I-TSH fraction with the best properties was the one closest in molecular weight to the native hormone, found at the posterior shoulder of the main protein peak of the elution pattern. (author) [es
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
Morimoto, Emi; Namerikawa, Susumu
The most characteristic trend in bidding and pricing behavior in recent years is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; therefore, in Japanese public works bids, it is the difference between the criteria price for low-price bidding investigations and the execution price. Virtually, bidders' strategies and behavior have been controlled by public engineers' budgets. Estimation and bid are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004. On another front, the accumulated estimation method is one of the general methods for public works, so there are two types of standard estimation methods in Japan. In this study, we performed a statistical analysis on the bid information of civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. It presents several findings showing that bidding and pricing behavior is related to the estimation method used for public works bids in Japan. The two types of standard estimation methods produce different results in the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). The comparison of the distributions of bid prices showed that the percentage of bids concentrated on the criteria for low-price bidding investigations tends to be higher in large-sized public works under the unit-price-type estimation method than under the accumulated estimation method. On the other hand, the number of bidders for public works estimated by unit price tends to increase significantly. The unit-price estimate is likely to have been one of the factors for construction companies in deciding whether to participate in the biddings.
A different approach to estimate nonlinear regression model using numerical methods
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
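As an illustration of the Gauss-Newton iteration discussed above, the following sketch fits the nonlinear model y = a*exp(b*x) by repeatedly solving the 2x2 normal equations J^T J delta = J^T r. The exponential model and the hand-rolled linear solve are illustrative assumptions, not the paper's examples.

```python
import math

def gauss_newton_exp(xs, ys, a, b, iters=20):
    """Gauss-Newton iterations for the model y = a*exp(b*x)."""
    for _ in range(iters):
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]     # residuals
        j1 = [math.exp(b * x) for x in xs]                        # d(model)/da
        j2 = [a * x * math.exp(b * x) for x in xs]                # d(model)/db
        # Normal equations (J^T J) delta = J^T r, solved by Cramer's rule
        s11 = sum(v * v for v in j1)
        s12 = sum(u * v for u, v in zip(j1, j2))
        s22 = sum(v * v for v in j2)
        g1 = sum(u * v for u, v in zip(j1, r))
        g2 = sum(u * v for u, v in zip(j2, r))
        det = s11 * s22 - s12 * s12
        a += (s22 * g1 - s12 * g2) / det
        b += (s11 * g2 - s12 * g1) / det
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]       # noise-free data: a=2, b=0.5
a_hat, b_hat = gauss_newton_exp(xs, ys, a=1.0, b=0.1)
```

On zero-residual data the iteration converges quadratically near the solution, recovering the generating parameters to machine precision from a rough starting guess.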
Spectral radiative property control method based on filling solution
International Nuclear Information System (INIS)
Jiao, Y.; Liu, L.H.; Hsu, P.-F.
2014-01-01
Controlling thermal radiation by tailoring the spectral properties of microstructures is a promising method that can be applied in many industrial systems and has been widely researched recently. Among various property tailoring schemes, geometry design of microstructures is a commonly used method. However, the existing radiation property tailoring is limited by the adjustability of processed microstructures; in other words, the spectral radiative properties of microscale structures cannot be changed after the gratings are fabricated. In this paper, we propose a method that adjusts the grating spectral properties by means of injecting a filling solution, which can modify the thermal radiation in a fabricated microstructure. Therefore, this method overcomes the limitation mentioned above. Both mercury and water are adopted as the filling solution in this study. Aluminum and silver are selected as the grating materials to investigate the generality and limitations of this control method. The rigorous coupled-wave analysis is used to investigate the spectral radiative properties of these filling-solution grating structures. A magnetic polaritons mechanism identification method is proposed based on the LC circuit model principle. It is found that this control method can be used with different grating materials, and different filling solutions enable the high absorption peak to move to longer or shorter wavelength bands. The results show that the filling-solution grating structures are promising for active control of spectral radiative properties. -- Highlights: • A filling solution grating structure is designed to adjust spectral radiative properties. • The mechanism of radiative property control is studied for engineering utilization. • Different grating materials are studied to find multi-functions for grating
Rapid cable tension estimation using dynamic and mechanical properties
Martínez-Castro, Rosana E.; Jang, Shinae; Christenson, Richard E.
2016-04-01
Main tension elements are critical to the overall stability of cable-supported bridges. A dependable and rapid determination of cable tension is desired to assess the state of a cable-supported bridge and evaluate its operability. A portable smart sensor setup is presented to reduce post-processing time and deployment complexity while reliably determining cable tension using dynamic characteristics extracted from spectral analysis. A self-recording accelerometer is coupled with a single-board microcomputer that communicates wirelessly with a remote host computer. The portable smart sensing device is designed such that additional algorithms, sensors and controlling devices for various monitoring applications can be installed and operated for additional structural assessment. The tension-estimating algorithms are based on taut string theory and expand to consider bending stiffness. The successful combination of cable properties allows the use of a cable's dynamic behavior to determine tension force. The tension-estimating algorithms are experimentally validated on a through-arch steel bridge subject to ambient vibration induced by passing traffic. The tension estimates are in good agreement with previously determined tension values for the structure.
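Under taut string theory, the fundamental frequency f1 of a cable of length L and mass per unit length m under tension T satisfies f1 = sqrt(T/m)/(2L), so the tension follows as T = 4*m*L^2*f1^2. A one-line sketch of that inversion; the numerical cable parameters are hypothetical, and the bending-stiffness extension mentioned in the abstract is omitted.

```python
def taut_string_tension(mass_per_length, length, fundamental_freq):
    """Cable tension from taut string theory.

    f1 = sqrt(T/m) / (2L)  =>  T = 4 * m * L^2 * f1^2
    (mass_per_length in kg/m, length in m, frequency in Hz; result in N).
    """
    return 4.0 * mass_per_length * length**2 * fundamental_freq**2

# Hypothetical hanger: 50 kg/m, 20 m long, measured first mode at 5 Hz.
tension = taut_string_tension(50.0, 20.0, 5.0)   # -> 2.0e6 N
```

In practice f1 is read off a peak of the acceleration spectrum, which is why the spectral analysis step dominates the field procedure.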
Ore reserve estimation: a summary of principles and methods
International Nuclear Information System (INIS)
Marques, J.P.M.
1985-01-01
The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances in order to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide: Conventional, Statistical and Geostatistical, and provides a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex and most widely employed ones. The Geostatistical Methods are the most recent, most precise and most complex ones. The Statistical Methods are intermediate to the others in complexity, diffusion and chronological order. (D.J.M.) [pt
International Nuclear Information System (INIS)
Brandenburg, N.P.
1978-01-01
The first part of this thesis is concerned with parameter methods for estimating the standard enthalpy of formation, ΔH°f, of inorganic compounds. In this type of method the estimate is a function of parameters assigned to the cation and anion, respectively. The usefulness of a new estimation method is illustrated in the case of uranyl sulphide. In the second part of this thesis, crystallographic and thermochemical properties of uranyl salts of group VI elements are described. Crystal structures are given for β-UO₂SO₄, UO₂SeO₃, and α-UO₂SeO₄. Thermochemical measurements have been restricted to the determination of ΔH°f(UO₂SO₃) and ΔH°f(UO₂TeO₃) by means of isoperibol solution calorimetry. (Auth.)
Moustafa, Sabry Gad Al-Hak Mohammad
Molecular simulation (MS) methods (e.g. Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool (especially at extreme conditions) to measure solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a big challenge, especially with ab initio-type models. In addition, comparing with experimental results through extrapolating properties from finite size to the thermodynamic limit can be a critical obstacle. We first estimate the free energy (FE) of a crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods that are known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) to describe many realistic solid phases; in addition, since the analysis is done numerically, the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to get the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom to specify each molecule. Nevertheless, we were able to efficiently avoid using those degrees of freedom through a mathematical transformation that uses only the atomic coordinates of the water molecules. In addition, the proton-disorder nature of hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton disorder contribution is
Harbert, Robert S; Nixon, Kevin C
2015-08-01
• Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
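The joint-likelihood idea behind CRACLE can be sketched by treating each species' climate tolerance as a Gaussian and maximizing the summed log-likelihoods over a climate grid. The Gaussian tolerance model, the grid search, and the species values below are illustrative assumptions; CRACLE itself characterizes tolerances from global specimen data.

```python
import math

def joint_loglik(climate, tolerances):
    """Sum of per-species Gaussian log-likelihoods at a candidate climate value.

    tolerances: list of (mean, std) climate-tolerance summaries per species.
    """
    return sum(-0.5 * ((climate - mu) / sd) ** 2 - math.log(sd)
               for mu, sd in tolerances)

def coexistence_estimate(tolerances, lo, hi, steps=1000):
    """Grid-search maximizer of the joint coexistence likelihood."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda c: joint_loglik(c, tolerances))

# Three hypothetical coexisting species with overlapping mean-annual-temperature
# tolerances (deg C); the joint optimum is their precision-weighted consensus.
species = [(12.0, 3.0), (15.0, 2.0), (14.0, 2.5)]
mat_hat = coexistence_estimate(species, 0.0, 30.0)
```

With Gaussian tolerances the maximizer is the precision-weighted mean of the species optima (about 14.05 here), which is the sense in which coexistence narrows the climate estimate beyond any single species' range.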
A New Method for Estimation of Velocity Vectors
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1998-01-01
The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received...
Comparison of methods used for estimating pharmacist counseling behaviors.
Schommer, J C; Sullivan, D L; Wiederholt, J B
1994-01-01
To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
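Two of the semivariogram models named above, the spherical and exponential models, can be sketched directly; the truncated fractal model is omitted, and the practical-range convention used for the exponential model is an assumption.

```python
import math

def spherical(h, sill, a):
    """Spherical semivariogram: reaches the sill exactly at the range a."""
    if h >= a:
        return sill
    r = h / a
    return sill * (1.5 * r - 0.5 * r**3)

def exponential(h, sill, a):
    """Exponential semivariogram: approaches the sill asymptotically.

    Uses the practical-range convention (95% of the sill reached at h = a).
    """
    return sill * (1.0 - math.exp(-3.0 * h / a))

gamma_s = spherical(50.0, 1.0, 100.0)      # halfway to the range
gamma_e = exponential(100.0, 1.0, 100.0)   # at the practical range
```

Fitting one of these models to vertical data and reading the corresponding interwell range off a chart is exactly the kind of ratio-based lookup the paper's estimation charts encode.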
Franciosi, Patrick; Spagnuolo, Mario; Salman, Oguz Umut
2018-04-01
Composites comprising included phases in a continuous matrix constitute a huge class of meta-materials, whose effective properties, whether they be mechanical, physical or coupled, can be selectively optimized by using appropriate phase arrangements and architectures. An important subclass is represented by "network-reinforced matrices," that is, those materials in which one or more of the embedded phases are co-continuous with the matrix in one or more directions. In this article, we present a method to study effective properties of simple such structures, from which more complex ones can be accessible. Effective properties are shown, in the framework of linear elasticity, to be estimable by using the global mean Green operator for the entire embedded fiber network, which is by definition sample-spanning. This network operator is obtained from one of infinite planar alignments of infinite fibers, of which the network can be seen as an interpenetrated set, with the fiber interactions being fully accounted for in the alignments. The mean operator of such alignments is given in exact closed form for isotropic elastic-like or dielectric-like matrices. We first exemplify how these operators relevantly provide, from classic homogenization frameworks, effective properties in the case of 1D fiber bundles embedded in an isotropic elastic-like medium. It is also shown that using infinite patterns with fully interacting elements over their whole influence range at any element concentration suppresses the dilute approximation limit of these frameworks. We finally present a construction method for a global operator of fiber networks described as interpenetrated such bundles.
Estimation and Properties of a Time-Varying GQARCH(1,1)-M Model
Directory of Open Access Journals (Sweden)
Sofia Anyfantaki
2011-01-01
analysis of these models computationally infeasible. This paper outlines the issues and suggests employing a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
Estimation of soil-soil solution distribution coefficient of radiostrontium using soil properties.
Ishikawa, Nao K; Uchida, Shigeo; Tagami, Keiko
2009-02-01
We propose a new approach for estimation of soil-soil solution distribution coefficient (K(d)) of radiostrontium using some selected soil properties. We used 142 Japanese agricultural soil samples (35 Andosol, 25 Cambisol, 77 Fluvisol, and 5 others) for which Sr-K(d) values had been determined by a batch sorption test and listed in our database. Spearman's rank correlation test was carried out to investigate correlations between Sr-K(d) values and soil properties. Electrical conductivity and water soluble Ca had good correlations with Sr-K(d) values for all soil groups. Then, we found a high correlation between the ratio of exchangeable Ca to Ca concentration in water soluble fraction and Sr-K(d) values with correlation coefficient R=0.72. This pointed us toward a relatively easy way to estimate Sr-K(d) values.
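The Spearman rank correlation test used above is just the Pearson correlation of midrank vectors; a minimal self-contained sketch follows, with illustrative data rather than the soil dataset.

```python
def ranks(xs):
    """Average (mid) ranks, 1-based, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2.0 + 1.0       # mean rank of the group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    dx = sum((a - mx) ** 2 for a in rx) ** 0.5
    dy = sum((b - my) ** 2 for b in ry) ** 0.5
    return num / (dx * dy)

# Monotone but nonlinear relationship: Spearman's rho is 1 even though
# the Pearson correlation of the raw values would be below 1.
rho = spearman([1, 2, 3, 4, 5], [1, 4, 9, 16, 25])
```

Rank-based correlation is the natural choice for Kd-versus-soil-property screening because it captures monotone relationships without assuming linearity.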
THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL
Directory of Open Access Journals (Sweden)
Y.À. Korobeynikov
2008-12-01
Full Text Available The paper presents the author's methods of estimating the regional professional mobile radio market potential, which belongs to high-tech b2b markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints, and the infrastructure development required for the operation of the technological systems. The paper gives an estimation of the professional mobile radio potential in Perm region. This estimation is already used by one of the systems integrators for its strategy development.
Comparative study of the geostatistical ore reserve estimation method over the conventional methods
International Nuclear Information System (INIS)
Kim, Y.C.; Knudsen, H.P.
1975-01-01
Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
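The inverse-of-the-distance-squared method compared in Part I can be sketched as a weighted average of nearby sample grades with weights 1/d^2; the toy well data are hypothetical.

```python
def idw_estimate(samples, x, y, power=2.0):
    """Inverse-distance-weighted grade estimate at location (x, y).

    samples: list of (sx, sy, grade).  A sample exactly on the target
    point is returned as-is (its weight would be infinite).
    """
    num = den = 0.0
    for sx, sy, grade in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return grade
        w = 1.0 / d2 ** (power / 2.0)   # power=2 gives the classic 1/d^2 weights
        num += w * grade
        den += w
    return num / den

wells = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]      # two hypothetical drill holes
grade_mid = idw_estimate(wells, 1.0, 0.0)        # midpoint: equal weights
```

Unlike kriging, the weights here ignore the spatial covariance structure entirely, which is the core of the geostatistical critique the report develops.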
Estimating misclassification error: a closer look at cross-validation based methods
Directory of Open Access Journals (Sweden)
Ounpraseuth Songthip
2012-11-01
Full Text Available Abstract Background To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
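A minimal k-fold CV error estimator, paired with a stand-in nearest-centroid classifier, sketches the procedure being compared; the interleaved fold assignment and the 1-D toy data are illustrative assumptions, not the study's design.

```python
def kfold_error(xs, ys, k, train_fn, predict_fn):
    """k-fold cross-validation estimate of misclassification error."""
    n = len(xs)
    errors = 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))               # interleaved folds
        tr_x = [xs[i] for i in range(n) if i not in test_idx]
        tr_y = [ys[i] for i in range(n) if i not in test_idx]
        model = train_fn(tr_x, tr_y)
        errors += sum(predict_fn(model, xs[i]) != ys[i] for i in test_idx)
    return errors / n

def train_centroids(xs, ys):
    """Nearest-centroid 'classifier': one mean per class on 1-D features."""
    cents = {}
    for c in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == c]
        cents[c] = sum(pts) / len(pts)
    return cents

def predict_centroid(cents, x):
    return min(cents, key=lambda c: abs(x - cents[c]))

xs = [0.1, 0.2, 0.3, 0.4, 5.1, 5.2, 5.3, 5.4]   # two well-separated classes
ys = [0, 0, 0, 0, 1, 1, 1, 1]
err = kfold_error(xs, ys, k=4, train_fn=train_centroids,
                  predict_fn=predict_centroid)
```

Note that this estimates the *average* generalization error over training sets; the paper's caution is precisely that such a CV estimate should not be read as the conditional error of one fixed trained classifier.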
Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar
Directory of Open Access Journals (Sweden)
Zhenxin Cao
2018-02-01
Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario with a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with the knowledge of target position. Then, in the scenario without the knowledge of target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
International Nuclear Information System (INIS)
Krikorian, O.H.
1984-12-01
In this study we develop correlation methods based on Knoop microhardness and melting points for estimating tensile strength, Young's modulus, and Poisson's ratio for Li₂O as a function of grain size, porosity, and temperature. We develop generalized expressions for extrapolating the existing data on thermal conductivity and thermal expansivity. These derived thermophysical data are combined to predict thermal stress factors for Li₂O. Based on the available vapor pressure data on Li₂O and empirical correlations for the liquid and vapor equation of state of Li₂O, we also make estimates of the critical properties of Li₂O and obtain a critical temperature of approximately 6800 ± 800 K.
Driessen, J.J.A.G.; Lin, T.C.; Phalippou, L.
2012-01-01
We develop a new methodology to estimate abnormal performance and risk exposure of nontraded assets from cash flows. Our methodology extends the standard internal rate of return approach to a dynamic setting. The small-sample properties are validated using a simulation study. We apply the method to
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (<2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, a specific wave type (body or surface waves), or the scattering properties assumed in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
Finite Element Method for Analysis of Material Properties
DEFF Research Database (Denmark)
Rauhe, Jens Christian
The use of cellular and composite materials has in recent years become more and more common in all kinds of structural components, and accurate knowledge of the effective properties is therefore essential. In this work the effective properties are determined using the real material microstructure and the finite element method. The material microstructure of the heterogeneous material is non-destructively determined using X-ray microtomography. A software program has been generated which uses the X-ray tomographic data as an input for the mesh generation of the material microstructure. To obtain a proper ... which are used for the determination of the effective properties of the heterogeneous material. Generally, the properties determined using the finite element method coupled with X-ray microtomography are in good agreement with both experimentally determined properties and properties determined using ...
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Directory of Open Access Journals (Sweden)
Julius Hannink
2017-08-01
Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
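The double-integration stage of such a pipeline can be sketched compactly. The linear de-drifting below is one common zero-velocity-update choice and the toy acceleration profile is invented; the paper benchmarks several alternative schemes:

```python
import numpy as np

def integrate_stride(acc, fs):
    """Double-integrate one stride of gravity-removed acceleration.

    Assumes the foot is stationary at the first and last sample of the
    stride, so residual velocity at the stride end is integration drift
    and is removed by linear de-drifting (a zero-velocity update)."""
    dt = 1.0 / fs
    vel = np.cumsum(acc, axis=0) * dt
    drift = np.outer(np.linspace(0.0, 1.0, len(vel)), vel[-1])
    vel = vel - drift                 # enforce zero velocity at stride end
    pos = np.cumsum(vel, axis=0) * dt
    return vel, pos

# Toy stride: constant forward acceleration then equal deceleration,
# plus a small sensor bias that the de-drifting removes.
fs = 100.0
acc = np.concatenate([np.full(50, 2.0), np.full(50, -2.0)])[:, None] + 0.05
vel, pos = integrate_stride(acc, fs)
```

In a real pipeline this step follows an orientation estimate that rotates accelerations into a global frame and subtracts gravity.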
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
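The core idea can be illustrated in a few lines: under Gaussian observational noise, ML fitting of a short segment is least squares over the structural parameter r and the initial value x1, both treated as unknowns. The plain grid search below is a simplified stand-in for the paper's fitting procedure, and the segment is kept short because chaos erases memory of x1 exponentially fast:

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic_orbit(r, x0, n):
    # Iterate x_{k+1} = r * x_k * (1 - x_k); x0 may be scalar or array.
    x = [np.asarray(x0, dtype=float)]
    for _ in range(n - 1):
        x.append(r * x[-1] * (1.0 - x[-1]))
    return np.stack(x)

# Noisy observations of a chaotic orbit (observational noise only).
r_true, x1_true, n = 3.9, 0.3, 12
y = logistic_orbit(r_true, x1_true, n) + rng.normal(0.0, 0.01, n)

# Coarse grid search over (r, x1) minimizing the sum of squared residuals.
rs = np.linspace(3.5, 4.0, 501)
x1s = np.linspace(0.05, 0.95, 181)
best = (np.inf, None, None)
for r in rs:
    orbits = logistic_orbit(r, x1s, n)            # shape (n, len(x1s))
    sse = ((y[:, None] - orbits) ** 2).sum(axis=0)
    j = int(np.argmin(sse))
    if sse[j] < best[0]:
        best = (float(sse[j]), float(r), float(x1s[j]))
sse_min, r_hat, x1_hat = best
```

The trade-off the abstract notes is visible here: lengthening the segment adds data but makes the objective increasingly rugged in x1.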
A comparison study of size-specific dose estimate calculation methods
Energy Technology Data Exchange (ETDEWEB)
Parikh, Roshni A. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children' s Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)
2018-01-15
The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc<0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
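Once a diameter is in hand, the SSDE calculation itself is a one-line conversion. The exponential fit coefficients below are those published in AAPM Report 204 for the 32-cm reference phantom (quoted here as commonly cited; verify against the report before any clinical use), and the elliptical effective-diameter formula corresponds to measurements like those of method D:

```python
import math

def ssde_from_diameter(ctdi_vol, d_cm, a=3.704369, b=0.03671937):
    """SSDE = f(d) * CTDIvol, with the exponential conversion-factor fit
    f(d) = a * exp(-b * d) for the 32-cm reference phantom (AAPM Report
    204); d may be the effective or water-equivalent diameter in cm."""
    return a * math.exp(-b * d_cm) * ctdi_vol

def effective_diameter(ap_cm, lat_cm):
    # Effective diameter of an elliptical cross-section from
    # anteroposterior and lateral measurements.
    return math.sqrt(ap_cm * lat_cm)
```

For a small patient (d well below ~35 cm) the conversion factor exceeds 1, so SSDE is higher than the displayed CTDIvol, which is the clinical point of the metric.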
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
On the estimation method of compressed air consumption during pneumatic caisson sinking
平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA
1990-01-01
There are several methods for estimating the compressed air consumption during pneumatic caisson sinking. Comparing these methods requires that the compressed air consumption be estimated under the same conditions. In this paper, methods are proposed that can accurately estimate the compressed air consumption during pneumatic caisson sinking.
International Nuclear Information System (INIS)
Quoc-Thang Vo
2013-01-01
This work is focused on a matrix/inclusion metal composite. A simple method is proposed to evaluate the elastic properties of one phase while the properties of the other phase are assumed to be known. The method is based on both an inverse homogenization scheme and mechanical field measurements by 2D digital image correlation. The originality of the approach rests on the scale studied, i.e. the microstructure scale of the material: the characteristic size of the inclusions is about a few tens of microns. The evaluation is performed on standard uniaxial tensile tests associated with a long-distance microscope, which allows observation of the surface of a specimen at the microstructure scale during mechanical loading. First, the accuracy of the method is estimated on 'perfect' mechanical fields coming from numerical simulations for four microstructures: elastic or porous single inclusions having either spherical or cylindrical shape. Second, this accuracy is estimated on real mechanical fields for two simple microstructures: an elasto-plastic metallic matrix containing a single cylindrical micro void or four cylindrical micro voids arranged in a square pattern. Third, the method is used to evaluate the elastic properties of αZr inclusions with arbitrary shape in an oxidized Zircaloy-4 sample of the fuel cladding of a pressurized water reactor after a loss-of-coolant accident (LOCA). Throughout this study, the phases are assumed to have isotropic properties. (author)
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.
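The regression step is simple enough to sketch directly. All numbers below are invented for illustration (the calibration data are constructed to follow an exact linear law); the paper fits such a model per crop against spectrophotometer references:

```python
import numpy as np

# Hypothetical calibration data: per-leaf reflectance R, transmittance T,
# and reference chlorophyll values (units assumed). Constructed to follow
# chl = 60 - 20*R - 100*T exactly, purely for illustration.
R = np.array([0.10, 0.12, 0.15, 0.18, 0.22, 0.25])
T = np.array([0.32, 0.27, 0.25, 0.20, 0.18, 0.13])
chl = np.array([26.0, 30.6, 32.0, 36.4, 37.6, 42.0])

# Least-squares fit of chl ~ b0 + b1*R + b2*T.
X = np.column_stack([np.ones_like(R), R, T])
beta, *_ = np.linalg.lstsq(X, chl, rcond=None)

def predict_chl(r, t):
    # Predict chlorophyll content from measured reflectance/transmittance.
    return beta[0] + beta[1] * r + beta[2] * t
```

With both R and T as inputs, the fit can separate absorption from surface effects in a way a reflectance-only model cannot, which is the design point the abstract makes.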
Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob
2015-01-01
some attention in the community and is quite promising for several applications. The proposed methods are based on optimal, adaptive filters that leave the desired signal, having a certain DOA and fundamental frequency, undistorted and suppress everything else. The filtering methods simultaneously operate in space and time, whereby it is possible to resolve cases that are otherwise problematic for pitch estimators or DOA estimators based on beamforming. Several special cases and improvements are considered, including a method for estimating the covariance matrix based on the recently proposed......
Merritt, Michael L.
2004-01-01
Aquifers are subjected to mechanical stresses from natural, non-anthropogenic, processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied. An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem. A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples. Methods are described for determining aquifer diffusivity by comparing the time
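The ocean-tide amplitude-ratio technique mentioned above reduces to a one-line estimate of hydraulic diffusivity under the classic Ferris (1952) solution for a non-leaky confined aquifer of infinite extent. The well distance, tidal period, and amplitude ratio below are hypothetical numbers for illustration:

```python
import math

def diffusivity_from_amplitude_ratio(ratio, x_m, period_s):
    """Hydraulic diffusivity D = T/S from the tidal amplitude ratio H/H0
    observed in a well a distance x from the ocean boundary, using the
    Ferris (1952) solution H/H0 = exp(-x * sqrt(pi * S / (t0 * T)))."""
    return math.pi * x_m ** 2 / (period_s * math.log(ratio) ** 2)

def tidal_time_lag_s(x_m, diffusivity, period_s):
    # Corresponding phase lag of the tidal signal at distance x.
    return x_m * math.sqrt(period_s / (4.0 * math.pi * diffusivity))

# Hypothetical example: semidiurnal lunar tide (t0 ~ 44712 s) and a well
# 300 m inland that sees half the ocean-tide amplitude.
D = diffusivity_from_amplitude_ratio(0.5, 300.0, 44712.0)
lag = tidal_time_lag_s(300.0, D, 44712.0)
```

Comparing the diffusivity implied by the amplitude ratio with the one implied by the observed time lag is one quick consistency check on the non-leaky assumption the abstract questions.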
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
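The flavor of a continuous-time grey-box fit can be shown on a 1st-order RC equivalent circuit (simpler than the paper's 2nd-order model, and not its instrumental-variable method). The time constant enters nonlinearly, so it is grid-searched while the remaining parameters are solved linearly; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated current-step response of a 1st-order RC equivalent circuit:
# v(t) = OCV - R0*I - R1*I*(1 - exp(-t/tau)). Parameters are made up.
OCV, R0, R1, tau, I = 3.7, 0.05, 0.03, 12.0, 2.0
t = np.arange(0.0, 60.0, 0.1)
v = OCV - R0 * I - R1 * I * (1.0 - np.exp(-t / tau))
v = v + rng.normal(0.0, 1e-4, len(t))       # measurement noise

# Grid-search tau; for each candidate, fit the linear parameters by
# least squares: v ~ v0 + theta * (1 - exp(-t/tau)).
best = (np.inf, None, None)
for tau_c in np.linspace(1.0, 30.0, 581):
    X = np.column_stack([np.ones_like(t), 1.0 - np.exp(-t / tau_c)])
    th, *_ = np.linalg.lstsq(X, v, rcond=None)
    sse = float(((X @ th - v) ** 2).sum())
    if sse < best[0]:
        best = (sse, float(tau_c), th)
_, tau_hat, th = best
v0_hat = th[0]            # = OCV - R0*I (R0 and OCV are not separable here)
r1_hat = -th[1] / I       # polarization resistance R1
```

Because the model is fitted in continuous time, the estimates do not degrade as the sampling rate grows, which is the robustness property the abstract emphasizes for stiff systems.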
Adaptive Methods for Permeability Estimation and Smart Well Management
Energy Technology Data Exchange (ETDEWEB)
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Estimating the biophysical properties of neurons with intracellular calcium dynamics.
Ye, Jingxin; Rozdeba, Paul J; Morone, Uriel I; Daou, Arij; Abarbanel, Henry D I
2014-06-01
We investigate the dynamics of a conductance-based neuron model coupled to a model of intracellular calcium uptake and release by the endoplasmic reticulum. The intracellular calcium dynamics occur on a time scale that is orders of magnitude slower than voltage spiking behavior. Coupling these mechanisms sets the stage for the appearance of chaotic dynamics, which we observe within certain ranges of model parameter values. We then explore the question of whether one can, using observed voltage data alone, estimate the states and parameters of the voltage plus calcium (V+Ca) dynamics model. We find the answer is negative. Indeed, we show that voltage plus another observed quantity must be known to allow the estimation to be accurate. We show that observing both the voltage time course V(t) and the intracellular Ca time course will permit accurate estimation, and from the estimated model state, accurate prediction after observations are completed. This sets the stage for how one will be able to use a more detailed model of V+Ca dynamics in neuron activity in the analysis of experimental data on individual neurons as well as functional networks in which the nodes (neurons) have these biophysical properties.
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be
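The evaluation metric is easy to state in code. The abstract does not specify the exact averaging convention, so one common choice (mean squared vector-error magnitude over all voxels, then the square root) is assumed here:

```python
import numpy as np

def mvf_rmse(mvf_est, mvf_ref):
    """RMSE between an estimated and a reference motion vector field,
    both of shape (..., 3): mean squared vector-error magnitude over
    all voxels, then the square root (assumed convention)."""
    err = np.asarray(mvf_est) - np.asarray(mvf_ref)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=-1))))

# Sanity check: a uniform 1-voxel shift in x gives an RMSE of exactly 1.
ref = np.zeros((4, 4, 4, 3))
est = ref.copy()
est[..., 0] = 1.0
```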
Geometric estimation method for x-ray digital intraoral tomosynthesis
Li, Liang; Yang, Yao; Chen, Zhiqiang
2016-06-01
It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.
Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M
2018-06-01
Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. The source attribution method can alleviate the drawbacks of phylogenetic clustering, but formal population-genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic state from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, the traffic state estimation is transformed into a missing data imputation problem, and the tensor completion framework is proposed to estimate missing traffic state. A tensor is constructed to model traffic state in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
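A minimal sketch of tensor completion on a toy traffic-state tensor: the low-rank CP model fitted by masked alternating least squares below is one of several completion algorithms (the paper's framework may differ), and the synthetic data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "traffic state" tensor (links x days x time slots) built from a
# rank-2 pattern, standing in for speeds derived from floating cars.
U = rng.uniform(0.5, 1.5, (8, 2))
V = rng.uniform(0.5, 1.5, (7, 2))
W = rng.uniform(0.5, 1.5, (24, 2))
T = np.einsum('ir,jr,kr->ijk', U, V, W)

mask = rng.random(T.shape) < 0.3            # only 30% of entries observed
Y = np.where(mask, T, 0.0)

def update(A, B, C, Yt, maskt):
    # Re-fit each row of factor A by least squares on its observed entries.
    for i in range(A.shape[0]):
        js, ks = np.nonzero(maskt[i])
        X = B[js] * C[ks]                   # Khatri-Rao rows (B_j * C_k)
        A[i] = np.linalg.lstsq(X, Yt[i, js, ks], rcond=None)[0]

# Rank-2 CP completion by alternating least squares over observed entries.
A = rng.uniform(0.5, 1.5, (8, 2))
B = rng.uniform(0.5, 1.5, (7, 2))
C = rng.uniform(0.5, 1.5, (24, 2))
for _ in range(30):
    update(A, B, C, Y, mask)
    update(B, A, C, np.transpose(Y, (1, 0, 2)), np.transpose(mask, (1, 0, 2)))
    update(C, A, B, np.transpose(Y, (2, 0, 1)), np.transpose(mask, (2, 0, 1)))

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
```

The point of the multi-way model is visible here: 78 factor parameters reconstruct a 1344-entry tensor from roughly 400 observations, which is why such methods can tolerate very low penetration rates.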
Vegetation index methods for estimating evapotranspiration by remote sensing
Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.
2010-01-01
Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
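A minimal sketch of the VI approach: actual ET is projected by scaling reference ET with a vegetation-index-based fraction. The coefficients below (bare-soil and full-cover EVI values) are illustrative placeholders; in practice they come from regressing flux-tower ETa against satellite EVI or NDVI for a given biome:

```python
# Hedged sketch of vegetation-index ET scaling. The EVI endpoint values
# are assumptions for illustration, not calibrated coefficients.
def eta_from_vi(et0_mm_day, evi, evi_bare=0.05, evi_full=0.65):
    """Scale reference ET (ET0) by an EVI-based fraction in [0, 1]."""
    f = (evi - evi_bare) / (evi_full - evi_bare)
    f = min(max(f, 0.0), 1.0)           # clamp to the physical range
    return et0_mm_day * f

print(eta_from_vi(6.0, 0.35))           # mid-density canopy -> ~3.0 mm/day
```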
Simple method for quick estimation of aquifer hydrogeological parameters
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resource assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
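The general idea of turning the Theis solution into a linear regression can be sketched with the classical Cooper-Jacob approximation (not the paper's exact fitting function): at late times drawdown is linear in ln(t), so a unitary linear regression recovers transmissivity T and storativity S. All numbers below are synthetic:

```python
import math

# Hedged sketch: Cooper-Jacob straight-line analysis of a pumping test.
# s = (Q / 4*pi*T) * ln(2.25*T*t / (r^2 * S)), linear in ln(t).
def fit_cooper_jacob(times, drawdowns, Q, r):
    n = len(times)
    x = [math.log(t) for t in times]
    xb = sum(x) / n
    yb = sum(drawdowns) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, drawdowns)) \
        / sum((xi - xb) ** 2 for xi in x)
    a = yb - b * xb
    T = Q / (4.0 * math.pi * b)                     # transmissivity, m^2/s
    S = 2.25 * T / (r ** 2 * math.exp(a / b))       # storativity
    return T, S

# Synthetic drawdowns generated from the approximation itself
T_true, S_true, Q, r = 0.01, 1e-4, 0.01, 10.0
ts = [600.0, 1200.0, 3600.0, 7200.0, 14400.0]
ss = [Q / (4 * math.pi * T_true) * math.log(2.25 * T_true * t / (r**2 * S_true))
      for t in ts]
T_est, S_est = fit_cooper_jacob(ts, ss, Q, r)
print(round(T_est, 4), round(S_est, 6))             # -> 0.01 0.0001
```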
Modal estimation by FBG for flexible structures attitude control
Jiang, Hao; Van Der Veek, B.; Dolk, V.; Kirk, D.; Gutierrez, H.
2014-01-01
This work investigates an online mode shape estimation method to estimate the time-varying modal properties and correct IMU readings in real time using distributed strain measurements of FBG sensor arrays. Compared to the notch filter approach, the proposed method removes structural vibration
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
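The first-generation (strict clock) logic is simple enough to state in a few lines: a calibration pair with a known divergence time fixes the substitution rate, and that rate then dates other splits. The numbers below are illustrative, not taken from the review:

```python
# Strict molecular clock sketch: pairwise distance d between two lineages
# that diverged t Myr ago satisfies d = 2 * r * t (both lineages accumulate
# substitutions). Calibration values here are made up for illustration.
def clock_rate(distance, calib_time):
    """Substitution rate per site per Myr per lineage."""
    return distance / (2.0 * calib_time)

def divergence_time(distance, rate):
    return distance / (2.0 * rate)

r = clock_rate(distance=0.12, calib_time=60.0)   # fossil-calibrated split
print(round(divergence_time(0.04, r), 1))        # -> 20.0 Myr
```

Second through fourth generation methods relax exactly the assumption hard-coded here: that `rate` is one constant across all branches.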
Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.
Directory of Open Access Journals (Sweden)
Timothy B Hallett
2008-04-01
Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
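A hedged sketch of the decomposition behind "method 1": the change in prevalence between surveys is split into new infections and deaths among the infected, using a cohort mortality rate. The closed-cohort approximation and all input values below are illustrative simplifications, not the paper's full formulation:

```python
# Sketch of incidence estimation from two prevalence surveys.
# Assumption: closed cohort, constant mortality among the infected.
def incidence_from_prevalence(p1, p2, dt_years, mort_infected):
    """Approximate per-susceptible incidence rate (per year)."""
    deaths = p1 * mort_infected * dt_years          # infected who died
    new_infections = p2 - p1 + deaths               # balance the change
    person_years_susceptible = (1 - (p1 + p2) / 2) * dt_years
    return new_infections / person_years_susceptible

# Prevalence 20% -> 22% over 3 years, 5%/yr mortality among infected
print(round(incidence_from_prevalence(0.20, 0.22, 3.0, 0.05), 3))  # -> 0.021
```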
Estimation of regional-scale groundwater flow properties in the Bengal Basin of India and Bangladesh
Michael, H.A.; Voss, C.I.
2009-01-01
Quantitative evaluation of management strategies for long-term supply of safe groundwater for drinking from the Bengal Basin aquifer (India and Bangladesh) requires estimation of the large-scale hydrogeologic properties that control flow. The Basin consists of a stratified, heterogeneous sequence of sediments with aquitards that may separate aquifers locally, but evidence does not support existence of regional confining units. Considered at a large scale, the Basin may be aptly described as a single aquifer with higher horizontal than vertical hydraulic conductivity. Though data are sparse, estimation of regional-scale aquifer properties is possible from three existing data types: hydraulic heads, 14C concentrations, and driller logs. Estimation is carried out with inverse groundwater modeling using measured heads, by model calibration using estimated water ages based on 14C, and by statistical analysis of driller logs. Similar estimates of hydraulic conductivities result from all three data types; a resulting typical value of vertical anisotropy (ratio of horizontal to vertical conductivity) is 10⁴. The vertical anisotropy estimate is supported by simulation of flow through geostatistical fields consistent with driller log data. The high estimated value of vertical anisotropy in hydraulic conductivity indicates that even disconnected aquitards, if numerous, can strongly control the equivalent hydraulic parameters of an aquifer system. © US Government 2009.
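The reason scattered aquitards drive such large vertical anisotropy can be shown with the standard equivalent-conductivity formulas for layered media: layer conductivities average arithmetically for horizontal flow but harmonically for vertical flow, so thin low-K layers dominate Kv. The layer values below are illustrative, not Bengal Basin data:

```python
# Equivalent horizontal (arithmetic) and vertical (harmonic) hydraulic
# conductivity of a layered sequence. Illustrative layers only.
def equivalent_kh_kv(thicknesses, conductivities):
    total = sum(thicknesses)
    kh = sum(b * k for b, k in zip(thicknesses, conductivities)) / total
    kv = total / sum(b / k for b, k in zip(thicknesses, conductivities))
    return kh, kv

# 99 m of sand (K = 1e-4 m/s) interleaved with just 1 m of clay (K = 1e-9 m/s)
kh, kv = equivalent_kh_kv([99.0, 1.0], [1e-4, 1e-9])
print(round(kh / kv))   # ~991: three orders of magnitude from 1% clay
```

With more numerous or lower-conductivity clay layers the ratio climbs toward the 10⁴ reported for the Basin.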
Efficient Methods of Estimating Switchgrass Biomass Supplies
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
Methods of albumin estimation in clinical biochemistry: Past, present, and future.
Kumar, Deepak; Banerjee, Dibyajyoti
2017-06-01
Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimation of human serum albumin (HSA). Currently, dye-binding or immunochemical methods are widely practiced. Each of these methods has its limitations. Research endeavors to overcome such limitations are on-going. The current trends in methodological aspects of albumin estimation guiding the field have not been reviewed. Therefore, it is the need of the hour to review several aspects of albumin estimation. The present review focuses on the modern trends of research from a conceptual point of view and gives an overview of recent developments to offer the readers a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.
Prasad, A.; Howells, A. E.; Shock, E.
2017-12-01
The biological fate of any metal depends on its chemical form in the environment. Arsenic, for example, is extremely toxic as inorganic As(III) but completely benign in the organic form of arsenobetaine. Thus, given an exhaustive set of reactions and their equilibrium constants (logK), the bioavailability of any metal can be obtained for blood plasma, hydrothermal fluids, or any system of interest. While many data exist for metal-inorganic ligands, logK data covering the temperature range of life for metal-organic complexes are sparse. Hence, we decided to estimate metal-organic logK values from correlations with the commonly available values of ligand pKa. Metal-ion-specific correlations were made with ligands classified according to their electron donor atoms, denticity, and other chemical factors. While this approach has been employed before (Carbonaro et al. 2007, GCA 71, 3958-3968), new correlations were developed that provide estimates even when no metal-organic logK is available. In addition, we have used the same methods to estimate the metal-organic entropy of association (ΔaS), which can provide logK for any temperature of biological relevance. Our current correlations employ logK and ΔaS data for 30 metal ions (such as the biologically relevant Fe(III) and Zn(II)) and 74 ligands (such as formate and ethylenediamine), and they can be extended to estimate metal-ligand reaction properties for these 30 metal ions with a virtually limitless number of ligands belonging to our ligand categories. With the help of such data, copper speciation was obtained for a defined growth medium for methanotrophs employed by Morton et al. (2000, AEM 66, 1730-1733), which agrees with experimental measurements showing that the free metal ion may not be the bioavailable form in all conditions. These results encourage us to keep filling the gaps in metal-organic logK data and continue finding relationships between biological responses (like metal-accumulation ratios
Improvement of Source Number Estimation Method for Single Channel Signal.
Directory of Open Access Journals (Sweden)
Zhi Dong
Full Text Available Source number estimation methods for single-channel signals are investigated and improvements for each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), achieves better performance than GDE at low SNR; however, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is unsatisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is improved considerably.
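The MDL criterion itself is a short computation on the eigenvalues of the sample covariance matrix (here of the delay-embedded multi-channel data): it trades the flatness of the smallest eigenvalues against a model-complexity penalty. The eigenvalues below are synthetic, representing two sources above a unit noise floor:

```python
import math

# Sketch of the MDL source-number criterion. For each candidate source
# count k, compare the geometric and arithmetic means of the remaining
# "noise" eigenvalues (equal iff the noise subspace is white) against a
# penalty of 0.5*k*(2p-k)*ln(N) free parameters.
def mdl_source_count(eigvals, n_snapshots):
    p = len(eigvals)
    best_k, best_cost = 0, float("inf")
    for k in range(p):
        tail = eigvals[k:]
        arith = sum(tail) / len(tail)
        geom = math.exp(sum(math.log(v) for v in tail) / len(tail))
        cost = -n_snapshots * len(tail) * math.log(geom / arith) \
               + 0.5 * k * (2 * p - k) * math.log(n_snapshots)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Two strong sources over a unit noise floor, 1000 snapshots
print(mdl_source_count([10.0, 8.0, 1.0, 1.0, 1.0, 1.0], 1000))  # -> 2
```

Colored noise breaks the "equal tail eigenvalues" assumption visible in the code, which is exactly the failure mode the diagonal-loading improvement targets.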
Improved Battery Parameter Estimation Method Considering Operating Scenarios for HEV/EV Applications
Directory of Open Access Journals (Sweden)
Jufeng Yang
2016-12-01
Full Text Available This paper presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.
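The core fitting step of a pulse-rest test can be sketched for a first-order RC model (one series resistance plus one RC pair): during the rest period the terminal voltage relaxes exponentially toward open-circuit voltage, so a log-linear regression recovers the RC parameters. This is a simplified illustration with synthetic data; the paper's method adds corrected initial-voltage expressions and spectrum-based dataset-length selection:

```python
import math

# Sketch: extract R1 and tau from the rest-period relaxation
# v(t) = ocv - i_pulse * R1 * exp(-t / tau), after current stops.
def fit_rest_relaxation(t, v, ocv, i_pulse):
    y = [math.log(ocv - vk) for vk in v]       # log-linearize
    n = len(t)
    tb, yb = sum(t) / n, sum(y) / n
    slope = sum((ti - tb) * (yi - yb) for ti, yi in zip(t, y)) \
            / sum((ti - tb) ** 2 for ti in t)
    tau = -1.0 / slope
    r1 = math.exp(yb - slope * tb) / i_pulse
    return r1, tau

# Synthetic rest-period voltages from known parameters
ocv, i_pulse, r1_true, tau_true = 3.70, 2.0, 0.015, 40.0
ts = [float(s) for s in range(0, 120, 10)]
vs = [ocv - i_pulse * r1_true * math.exp(-t / tau_true) for t in ts]
r1, tau = fit_rest_relaxation(ts, vs, ocv, i_pulse)
print(round(r1, 3), round(tau, 1))             # -> 0.015 40.0
```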
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
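The residual water-balance calculation the review uses to validate ET models is a one-liner once units are handled: over a year with negligible storage change, ET = P - Q. The basin numbers below are illustrative:

```python
# Annual basin water balance: ET (mm) = precipitation (mm) - runoff (mm),
# assuming net storage change is negligible at the annual time scale.
def basin_et_mm(precip_mm, discharge_m3, basin_area_km2):
    runoff_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0
    return precip_mm - runoff_mm

# Illustrative basin: 800 mm rain, 6e8 m^3 annual discharge, 2000 km^2 area
print(round(basin_et_mm(800.0, 6e8, 2000.0)))    # -> 500 mm/yr
```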
DEFF Research Database (Denmark)
Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens
2016-01-01
A rigorous methodology is developed that addresses numerical and statistical issues when developing group contribution (GC) based property models, such as regression methods, optimization algorithms, performance statistics, outlier treatment, parameter identifiability, and uncertainty of the prediction. The methodology is evaluated through development of a GC method for the prediction of the heat of combustion (ΔHco) for pure components. The results showed that robust regression leads to the best performance statistics for parameter estimation. The bootstrap method is found to be a valid alternative… Due to identifiability issues, reporting of the 95% confidence intervals of the predicted property values should be mandatory, as opposed to reporting only single-value predictions, currently the norm in the literature. Moreover, inclusion of higher-order groups (additional parameters) does not always lead to improved…
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108: Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research Laboratory. Period covered: 20-04-2015 – 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
Pipeline heating method based on optimal control and state estimation
Energy Technology Data Exchange (ETDEWEB)
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are considered as the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
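The Bayesian filtering idea can be illustrated at its simplest with a scalar Kalman filter: an internal (excess) temperature state is reconstructed from noisy external-wall measurements under an assumed cooling model. The dynamics and all numbers below are illustrative stand-ins for the paper's finite-volume cross-section model:

```python
# Minimal scalar Kalman filter sketch for temperature reconstruction
# during a simulated shutdown. State x is excess temperature above
# ambient; x_k = a * x_{k-1} models exponential cooling (assumption).
def kalman_temperature(z_meas, T0, P0, a, q, r_noise):
    x, P = T0, P0
    for z in z_meas:
        x, P = a * x, a * a * P + q           # predict with cooling model
        K = P / (P + r_noise)                 # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P   # update with wall measurement
    return x

# Noisy external-wall readings (excess temperature, K), synthetic
readings = [58.0, 55.9, 54.2, 52.0, 50.3]
est = kalman_temperature(readings, T0=60.0, P0=4.0, a=0.97, q=0.01, r_noise=1.0)
print(round(est, 1))   # filtered estimate blending model and measurements
```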
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)
2015-01-01
A number of international standards and guides describe various statistical methods to be applied for the management, control, and improvement of processes, with the purpose of analyzing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.
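The core Type A evaluation that these standards codify is short: the mean of repeated readings, their sample standard deviation, and the standard uncertainty of the mean. The readings below are illustrative:

```python
import math

# GUM-style Type A evaluation: mean and standard uncertainty of the mean
# u = s / sqrt(n) for n repeated, independent readings (synthetic values).
def standard_uncertainty(readings):
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, s / math.sqrt(n)

mean, u = standard_uncertainty([10.02, 10.05, 9.98, 10.01, 10.04])
print(round(mean, 3), round(u, 3))      # -> 10.02 0.012
```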
Power system frequency estimation based on an orthogonal decomposition method
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
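One phase-based idea underlying sliding-DFT frequency estimation can be shown directly: for an analytic (complex) signal, the phase advance of a DFT bin between two windows offset by one sample is exactly 2πf/fs, yielding f without assuming synchronous sampling. A real signal must first be made analytic (e.g. via orthogonal filters, as in the paper) so the negative-frequency image does not bias the phase. This is a bare sketch, not the paper's two-stage scheme:

```python
import cmath, math

# Single DFT bin of a window, and frequency from the phase advance of
# that bin between two windows slid by one sample.
def bin_dft(x, k):
    n = len(x)
    return sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))

def estimate_freq(signal, fs, k, n):
    d1 = bin_dft(signal[:n], k)
    d2 = bin_dft(signal[1:n + 1], k)             # window slid one sample
    return cmath.phase(d2 / d1) * fs / (2 * math.pi)

fs, f0, n = 1600.0, 50.2, 64                     # off-nominal 50.2 Hz system
sig = [cmath.exp(2j * math.pi * f0 * m / fs) for m in range(n + 1)]
fhat = estimate_freq(sig, fs, k=2, n=n)
print(round(fhat, 2))                            # -> 50.2
```

Note the estimate lands on 50.2 Hz even though the 64-point, 1600 Hz grid has no bin there; this robustness to asynchronous sampling is what the paper's orthogonal-filter scheme preserves under noise and harmonics.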
Comparison of methods for estimating herbage intake in grazing dairy cows
DEFF Research Database (Denmark)
Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis
2015-01-01
Estimation of herbage intake is a challenge under both practical and experimental conditions. The aim of this study was to estimate herbage intake with different methods for cows grazing 7 h daily on either spring or autumn pastures. In order to generate variation between cows, the 20 cows per… season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance, (2) intake…
A service based estimation method for MPSoC performance modelling
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand
2008-01-01
This paper presents an abstract service-based estimation method for MPSoC performance modelling which allows fast, cycle-accurate design space exploration of complex architectures, including multiprocessor configurations, at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced…
Seismic Methods of Identifying Explosions and Estimating Their Yield
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However a lesson from the three DPRK declared nuclear explosions since 2006, is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and accurately estimate their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and used to understand those differences in term of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally we are also exploring the value of combining seismic information with other technologies including acoustic and InSAR techniques to better understand the source characteristics. Our goal is to improve our explosion models
Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods
Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail
2018-03-01
Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average), however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation). In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and
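The focal-power-fraction route from measured power to peak intensity can be sketched under one explicit assumption: if a fraction FPF of the acoustic power P lies in a focal lobe that is roughly Gaussian with -6 dB beam diameter d6, then the on-axis peak is I_sp = 8 ln(2) FPF P / (π d6²). The Gaussian-profile assumption and the numbers below are for illustration; they are not the paper's calibration data:

```python
import math

# Hedged FPF-based spatial-peak intensity estimate (Gaussian focal lobe
# assumed). Derivation: for I(r) = I0*exp(-2r^2/w0^2), integrating gives
# P_lobe = I0*pi*w0^2/2, and the -6 dB diameter satisfies w0^2 = d6^2/(4 ln 2).
def peak_intensity(power_w, fpf, d6_m):
    """Spatial-peak intensity in W/m^2."""
    return 8.0 * math.log(2.0) * fpf * power_w / (math.pi * d6_m ** 2)

# 40 W drive power, 70% of power in the focal lobe, 1.2 mm -6 dB beamwidth
i_sp = peak_intensity(40.0, 0.70, 1.2e-3)
print(round(i_sp / 1e4))                 # in W/cm^2, order of a few thousand
```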
Novel Method for 5G Systems NLOS Channels Parameter Estimation
Directory of Open Access Journals (Sweden)
Vladeta Milenkovic
2017-01-01
Full Text Available For the development of new 5G systems to operate in mm bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach to NLOS channel parameter estimation is presented. Estimation is performed based on the LCR performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-based estimation approaches.
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Energy Technology Data Exchange (ETDEWEB)
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained considerable popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates have traditionally been aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted considerable interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to the finite number of iterations, introduces an additional error. Most such algorithms use the residual in the stopping criterion, so control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
New methods for estimating follow-up rates in cohort studies
Directory of Open Access Journals (Sweden)
Xiaonan Xue
2017-12-01
Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI) also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by the total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT) in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT) that avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions The person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for the PTFR. However, the FPT is recommended when event rates and
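The simplified person-time (SPT) idea described in the abstract can be sketched as follows. This is an illustration of the stated principle (events are credited full potential follow-up, so the event rate never needs to be estimated), assuming for simplicity a single common potential follow-up time; the paper's exact definitions may differ in detail.

```python
def spt_follow_up_rate(subjects, potential_time):
    """Simplified person-time (SPT) estimate of the PTFR.
    subjects: list of (observed_time, had_event) tuples.
    potential_time: follow-up each subject would accrue with no dropout
    (time from enrolment to administrative censoring); assumed common
    to all subjects here for simplicity.
    Subjects with an event are assigned full potential follow-up time,
    so the event rate does not need to be estimated."""
    observed = sum(potential_time if event else t for t, event in subjects)
    total = potential_time * len(subjects)
    return observed / total

cohort = [(5.0, False),   # completed 5 years of follow-up
          (5.0, False),
          (2.0, True),    # event at year 2 -> credited 5 years
          (1.0, False)]   # dropout at year 1
rate = spt_follow_up_rate(cohort, 5.0)   # (5 + 5 + 5 + 1) / 20 = 0.8
```

For comparison, the Percentage Method would count only the three enrollees with an event or complete follow-up, giving 3/4 = 0.75 regardless of when the dropout left.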
Dobriyal, Pariva; Qureshi, Ashi; Badola, Ruchi; Hussain, Syed Ainul
2012-08-01
The maintenance of elevated soil moisture is an important ecosystem service of natural ecosystems. Understanding the patterns of soil moisture distribution is useful to a wide range of agencies concerned with weather and climate, soil conservation, agricultural production and landscape management. However, the great heterogeneity in the spatial and temporal distribution of soil moisture and the lack of standard methods to estimate this property limit its quantification and use in research. This literature-based review aims to (i) compile the available knowledge on the methods used to estimate soil moisture at the landscape level, (ii) compare and evaluate the available methods on the basis of common parameters such as resource efficiency, accuracy of results and spatial coverage and (iii) identify the method that will be most useful for forested landscapes in developing countries. On the basis of the strengths and weaknesses of each of the methods reviewed, we conclude that the direct (gravimetric) method is accurate and inexpensive but is destructive, slow and time-consuming, and does not allow replication, thereby having limited spatial coverage. The suitability of indirect methods depends on the cost, accuracy, response time, and effort involved in installation, management and durability of the equipment. Our review concludes that measurements of soil moisture using the Time Domain Reflectometry (TDR) and Ground Penetrating Radar (GPR) methods are instantaneously obtained and accurate. GPR may be used over larger areas (up to 500 × 500 m a day) but is not cost-effective and is difficult to use in forested landscapes in comparison to TDR. This review will be helpful to researchers, foresters, natural resource managers and agricultural scientists in selecting the appropriate method for estimation of soil moisture, keeping in view the time and resources available to them, and to generate information for efficient allocation of water resources and
Scott, Elaine P.
1996-01-01
A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures, which consequently necessitates accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LaRC researchers through the design of in-house experimentation procedures and through the use of an existing general-purpose finite element software.
Morrow, Thomas E.; Behring, II, Kendricks A.
2004-03-09
A method to determine thermodynamic properties of a natural gas hydrocarbon, when the speed of sound in the gas is known at an arbitrary temperature and pressure. Thus, the known parameters are the sound speed, temperature, pressure, and concentrations of any dilute components of the gas. The method uses a set of reference gases and their calculated density and speed of sound values to estimate the density of the subject gas. Additional calculations can be made to estimate the molecular weight of the subject gas, which can then be used as the basis for mass flow calculations, to determine the speed of sound at standard pressure and temperature, and to determine various thermophysical characteristics of the gas.
Methods for Measuring and Estimating Methane Emission from Ruminants
Directory of Open Access Journals (Sweden)
Jørgen Madsen
2012-04-01
Full Text Available This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods, chambers/respiration chambers, the SF6 technique and the in vitro gas production technique, as well as the newer CO2 methods, are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, the combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments.
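The model-estimation route mentioned above (calculating emission from intake and diet composition) can be illustrated with an IPCC-style Tier 2 calculation. This formula and the default Ym value are taken from the IPCC guidelines, not from this paper, and are shown here only as a sketch of the approach.

```python
def enteric_ch4_kg_per_year(gross_energy_mj_per_day, ym_percent=6.5):
    """IPCC Tier 2 style model estimate of enteric methane emission:
    a fixed methane conversion factor Ym (fraction of gross energy
    intake lost as methane, ~6.5% default for cattle) is applied to
    daily gross energy intake, then converted to mass using the
    energy content of methane (~55.65 MJ/kg)."""
    return gross_energy_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

# A dairy cow ingesting ~300 MJ/day of gross energy
ch4 = enteric_ch4_kg_per_year(300.0)   # roughly 128 kg CH4 per year
```

Chamber, SF6 and CO2 methods measure what such models can only predict, which is why the paper stresses knowing each method's error sources before comparing studies.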
International Nuclear Information System (INIS)
Oyama, Takahiro; Suzuki, Koichi
2006-01-01
Chemical weathering, porewater squeezing and physical properties of sedimentary rocks were examined. The chemical weathering potential of the rocks was described by their sulfur content, as an acceleration factor of weathering, and their carbonate content, as a neutralization factor. The carbonate contents of the rocks were measured accurately by the gas pressure measurement method. The porewater squeezing method was applied to semi-hard sedimentary rocks (Opalinus Clay). The chemical change of the porewater extracted under high pressure conditions was estimated. The physical properties of sedimentary rocks show relationships among porosity, permeability and resistivity coefficient within the same rock types. It is possible to estimate the water permeability from geophysical tests. (author)
Vishwanath, Karthik; Chang, Kevin; Klein, Daniel; Deng, Yu Feng; Chang, Vivide; Phelps, Janelle E; Ramanujam, Nimmi
2011-02-01
Steady-state diffuse reflection spectroscopy is a well-studied optical technique that can provide a noninvasive and quantitative method for characterizing the absorption and scattering properties of biological tissues. Here, we compare three fiber-based diffuse reflection spectroscopy systems that were assembled to create a light-weight, portable, and robust optical spectrometer that could be easily translated for repeated and reliable use in mobile settings. The three systems were built using a broadband light source and a compact, commercially available spectrograph. We tested two different light sources and two spectrographs (manufactured by two different vendors). The assembled systems were characterized by their signal-to-noise ratios, the source-intensity drifts, and detector linearity. We quantified the performance of these instruments in extracting optical properties from diffuse reflectance spectra in tissue-mimicking liquid phantoms with well-controlled optical absorption and scattering coefficients. We show that all assembled systems were able to extract the optical absorption and scattering properties with errors less than 10%, while providing greater than ten-fold decrease in footprint and cost (relative to a previously well-characterized and widely used commercial system). Finally, we demonstrate the use of these small systems to measure optical biomarkers in vivo in a small-animal model cancer therapy study. We show that optical measurements from the simple portable system provide estimates of tumor oxygen saturation similar to those detected using the commercial system in murine tumor models of head and neck cancer.
Hexographic Method of Complex Town-Planning Terrain Estimate
Khudyakov, A. Ju
2017-11-01
The article deals with the vital problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods and contains examples of the method’s application. It discloses the author’s procedure for estimating restrictions and building a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory’s potential. It is possible to use an unlimited number of estimated factors. The method can be used for the integrated assessment of urban areas. In addition, it is possible to use the method for preliminary evaluation of a territory’s commercial attractiveness in the preparation of investment projects. The technique results in simple, informative graphics. Graphical interpretation is straightforward for experts, and a definite advantage is that the results are readily perceived even by audiences without professional training. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also
DEFF Research Database (Denmark)
Cunico, Larissa; Hukkerikar, Amol; Ceriani, Roberta
2013-01-01
The paper is a review of the combined group contribution (GC)–atom connectivity index (CI) approach for prediction of physical and thermodynamic properties of organic chemicals and their mixtures, with special emphasis on lipids. The combined approach employs carefully selected datasets of different … dependent, have been developed. For mixtures, properties related to phase equilibria are modeled with GE-based models (UNIQUAC, UNIFAC, NRTL, and the combined UNIFAC-CI method). The collected phase equilibrium data for VLE and SLE have been tested for thermodynamic consistency together with a performance evaluation of the GE-models. The paper also reviews the role of the databases and the mathematical and thermodynamic consistency of the measured/estimated data and the predictive nature of the developed models.
Weres, Jerzy; Kujawa, Sebastian; Olek, Wiesław; Czajkowski, Łukasz
2016-04-01
Knowledge of physical properties of biomaterials is important in understanding and designing agri-food and wood processing industries. In the study presented in this paper computational methods were developed and combined with experiments to enhance identification of agri-food and forest product properties, and to predict heat and water transport in such products. They were based on the finite element model of heat and water transport and supplemented with experimental data. Algorithms were proposed for image processing, geometry meshing, and inverse/direct finite element modelling. The resulting software system was composed of integrated subsystems for 3D geometry data acquisition and mesh generation, for 3D geometry modelling and visualization, and for inverse/direct problem computations for the heat and water transport processes. Auxiliary packages were developed to assess performance, accuracy and unification of data access. The software was validated by identifying selected properties and using the estimated values to predict the examined processes, and then comparing predictions to experimental data. The geometry, thermal conductivity, specific heat, coefficient of water diffusion, equilibrium water content and convective heat and water transfer coefficients in the boundary layer were analysed. The estimated values, used as an input for simulation of the examined processes, enabled reduction in the uncertainty associated with predictions.
Physical-chemical property based sequence motifs and methods regarding same
Braun, Werner [Friendswood, TX; Mathura, Venkatarajan S [Sarasota, FL; Schein, Catherine H [Friendswood, TX
2008-09-09
A data analysis system, program, and/or method, e.g., a data mining/data exploration method, using physical-chemical property motifs. For example, a sequence database may be searched for identifying segments thereof having physical-chemical properties similar to the physical-chemical property motifs.
Training Methods for Image Noise Level Estimation on Wavelet Components
Directory of Open Access Journals (Sweden)
A. De Stefano
2004-12-01
Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed that the training-based methods performed best for the images and the range of noise levels considered.
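The baseline MAD rule that the proposed methods are compared against can be sketched as follows. As commonly implemented (following Donoho and Johnstone), the median of the absolute finest-scale diagonal wavelet coefficients is divided by 0.6745; the Haar HH subband is computed by hand here so that no wavelet library is needed.

```python
import numpy as np

def noise_sigma_mad(image):
    """Estimate the noise standard deviation from the finest-scale
    diagonal (HH) Haar wavelet coefficients using the MAD rule:
    sigma ~ median(|HH|) / 0.6745.  For i.i.d. Gaussian noise the HH
    coefficients are N(0, sigma^2), and 0.6745 is the median absolute
    value of a standard normal variate."""
    x = np.asarray(image, dtype=float)
    hh = (x[0::2, 0::2] - x[0::2, 1::2]
          - x[1::2, 0::2] + x[1::2, 1::2]) / 2.0
    return np.median(np.abs(hh)) / 0.6745

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 10.0, size=(256, 256))  # pure N(0, 10) noise
sigma_hat = noise_sigma_mad(noisy)              # close to 10
```

The model assumption flagged in the abstract is visible here: the rule is accurate when the HH subband is dominated by noise, but images with strong fine-scale texture bias it upward, which motivates the training-based alternatives.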
Huang, Hening
2018-01-01
This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
Lake and Reservoir Evaporation Estimation: Sensitivity Analysis and Ranking Existing Methods
Directory of Open Access Journals (Sweden)
maysam majidi
2016-02-01
Full Text Available Introduction: Water, when harvested, is commonly stored in dams, but up to half of it may be lost to evaporation, leading to a huge waste of our resources. Estimating evaporation from lakes and reservoirs is not a simple task, as there are a number of factors that can affect the evaporation rate, notably the climate and physiography of the water body and its surroundings. Several methods are currently used to predict evaporation from meteorological data in open water reservoirs. Each of these methods has advantages and disadvantages in terms of accuracy and simplicity of application. Although the evaporation pan method is well known to have significant uncertainties in both magnitude and timing, it is extensively used in Iran because of its simplicity. The evaporation pan provides a measurement of the combined effect of temperature, humidity, wind speed and solar radiation on evaporation. However, pan measurements may not be adequate for the reservoir operations/development and water accounting strategies for managing drinking water in arid and semi-arid conditions, which require accurate evaporation estimates. Moreover, there has not been a consensus on which methods are better to employ, due to the lack of important long-term measured data such as temperature profile, radiation and heat fluxes in most lakes and reservoirs in Iran. Consequently, we initiated this research to find the most cost-effective evaporation method, with possibly fewer data requirements, for our study area, i.e. the Doosti dam reservoir, which is located in a semi-arid region of Iran. Materials and Methods: Our study site was the Doosti dam reservoir, located on the border between Iran and Turkmenistan, which was constructed by the Ministry of Water and Land Reclamation of the Republic of Turkmenistan and the Khorasan Razavi Regional Water Board of the Islamic Republic of Iran. Meteorological data including maximum and minimum air temperature and evaporation from class A pan
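The pan method discussed above reduces to a one-line scaling. The 0.7 coefficient below is a commonly quoted class A pan value and is an assumption for illustration, not a number from this study; as the abstract notes, the coefficient varies with climate and season and should be calibrated locally.

```python
def lake_evaporation_mm(pan_evaporation_mm, pan_coefficient=0.7):
    """Pan method sketch: open-water evaporation is taken as measured
    class A pan evaporation scaled by a pan coefficient, which corrects
    for the pan's smaller heat storage and exposed metal walls."""
    return pan_coefficient * pan_evaporation_mm

# 8 mm measured in the pan on a hot day -> ~5.6 mm from the reservoir
e_lake = lake_evaporation_mm(8.0)
```

The simplicity of this calculation is exactly why the pan method persists despite its known magnitude and timing uncertainties relative to energy-balance approaches.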
An improved method to estimate reflectance parameters for high dynamic range imaging
Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro
2008-01-01
Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
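The log-transformed least-squares step of the second method can be sketched on a simplified specular lobe: a Gaussian in the off-specular angle, omitting the geometric attenuation and Fresnel terms of the full Torrance-Sparrow model. The parameter names and synthetic values below are illustrative only.

```python
import numpy as np

# Synthetic noiseless specular lobe: gloss intensity K = 2.0,
# surface roughness sigma = 0.1 rad (assumed values for illustration)
K_true, sigma_true = 2.0, 0.1
alpha = np.linspace(0.0, 0.3, 50)   # angle between half-vector and normal
intensity = K_true * np.exp(-alpha**2 / (2.0 * sigma_true**2))

# Taking logs makes the model linear in alpha^2:
#   ln I = ln K - alpha^2 / (2 sigma^2)
# so ordinary least squares recovers both parameters.
slope, intercept = np.polyfit(alpha**2, np.log(intensity), 1)
K_hat = np.exp(intercept)                   # estimated gloss intensity
sigma_hat = np.sqrt(-1.0 / (2.0 * slope))   # estimated surface roughness
```

In the paper's pipeline this fit is applied to the residual specular component, obtained by first fitting and subtracting the diffuse reflection, which is what lets the second method avoid explicit diffuse/specular separation.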
DEFF Research Database (Denmark)
Nielsen, Morten Ø.; Frederiksen, Per Houmann
2005-01-01
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all … and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
Directory of Open Access Journals (Sweden)
Yu-Hua Zhang
2017-01-01
Full Text Available Residual stress has a significant influence on the performance of mechanical components, and the nondestructive estimation of residual stress is a long-standing difficult problem. This study applies the relative nonlinear coefficient of the critical refraction longitudinal (LCR) wave to nondestructively characterize the stress state of materials, and the feasibility of residual stress estimation using the nonlinear property of the LCR wave is verified. Nonlinear ultrasonic measurements based on the LCR wave are conducted on components with a known stress state to calculate the relative nonlinear coefficient. Experimental results indicate that the relative nonlinear coefficient monotonically increases with prestress and that its increment is about 80%, while the wave velocity only decreases by about 0.2%. The sensitivity of the relative nonlinear coefficient to stress is thus much higher than that of wave velocity. Furthermore, a dependence between the relative nonlinear coefficient and the deformation state of components is found. The stress detection resolution based on the nonlinear property of the LCR wave is 10 MPa, higher than that achievable with wave velocity. These results demonstrate that the nonlinear property of the LCR wave is more suitable for stress characterization than wave velocity, and this quantitative information could be used for residual stress estimation.
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). The article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author’s technique is based upon a synthesis of approaches to the evaluation of projects implemented in the private and public sectors and, in contrast to the existing methods, allows the indirect (multiplicative) effect arising during the implementation of a project to be taken into account. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed. The matrix is based on data for the Sverdlovsk region for 2013. The genesis of balance models of economic systems is presented, and the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now is traced. It is shown that the SAM is widely used worldwide for a wide range of applications, primarily to assess the impact of various exogenous factors on the regional economy. In order to refine the estimates of multiplicative effects, the “industry” account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step allows the particular characteristics of the industry of the estimated investment project to be considered. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
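The SAM multiplier calculation described above follows the standard accounting-multiplier algebra, sketched here on a toy three-account matrix. The numbers are purely illustrative and are not taken from the Sverdlovsk SAM.

```python
import numpy as np

# Toy 3-account social accounting matrix of expenditure shares A:
# entry A[i, j] is the fraction of account j's outlays paid to account i.
# Column sums are below 1, so (I - A) is invertible.
A = np.array([[0.1, 0.3, 0.2],
              [0.2, 0.1, 0.3],
              [0.3, 0.2, 0.1]])

# Accounting multiplier matrix M = (I - A)^-1: total (direct plus
# indirect) income generated per unit of exogenous injection.
M = np.linalg.inv(np.eye(3) - A)

injection = np.array([100.0, 0.0, 0.0])  # e.g. PPP project spending
total_effect = M @ injection             # exceeds 100: multiplier > 1
```

The disaggregation of the "industry" account mentioned in the abstract simply enlarges A so that the injection can be placed in the specific OKVED industry of the evaluated project.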
Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods
Energy Technology Data Exchange (ETDEWEB)
Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium); Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie
1983-01-14
The authors compared three methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimates. In the autodisplacement method, B/T ratios, obtained when either labelled hormone alone or labelled and unlabelled hormone together are added to the antibody, were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such a comparison.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
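The k-NN probability estimate discussed above is the simplest instance of the nonparametric-regression view: the estimated probability is just the proportion of the k nearest training points carrying the label of interest. This generic sketch is not the authors' code; the toy data and parameter values are assumptions for illustration.

```python
import numpy as np

def knn_probability(x_train, y_train, x_new, k=5):
    """k-NN class-probability estimate: the fraction of the k nearest
    training points (Euclidean distance) whose label equals 1.  Since
    labels are 0/1, averaging them over the neighbourhood is exactly a
    local nonparametric regression estimate of P(Y = 1 | X = x_new)."""
    d = np.linalg.norm(x_train - x_new, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)  # deterministic toy labels
p = knn_probability(x, y, np.array([2.0, 2.0]), k=5)  # near 1.0
```

Consistency of such estimators (as k grows with the sample size) is one of the theoretical results the paper reviews; logistic regression, by contrast, is consistent only when its linear model is correctly specified.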
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
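The DerSimonian and Laird moment estimator that the review takes as its point of departure has a simple closed form. An illustrative sketch (one of the 16 estimators the review covers; effect sizes and variances below are invented):

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance:
    tau2 = max(0, (Q - df) / c), where Q is the usual heterogeneity
    statistic under inverse-variance weights w_i = 1 / v_i."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    y_bar = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    q = np.sum(w * (y - y_bar) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - df) / c)

tau2 = dersimonian_laird_tau2([0.0, 2.0, 4.0], [1.0, 1.0, 1.0])
```

The Paule-Mandel and REML estimators recommended by the review replace this closed form with iterative schemes.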
Bezminabadi, Sina Norouzi; Ramezanzadeh, Ahmad; Esmaeil Jalali, Seyed-Mohammad; Tokhmechi, Behzad; Roustaei, Abbas
2017-03-01
Rate of penetration (ROP) is one of the key indicators of drilling operation performance. The estimation of ROP in drilling engineering is very important for a more accurate assessment of drilling time, which affects operation costs. Hence, building a ROP model from operational and environmental parameters is crucial. For this purpose, the physical and mechanical properties of the rock were first derived from well logs. Correlations between the paired data were determined to find the parameters influencing ROP. A new ROP model was developed in one of the Azadegan oil field wells in southwest Iran. The model was simulated using Multiple Nonlinear Regression (MNR) and an Artificial Neural Network (ANN). By adding the rock properties, the accuracy of the models was markedly improved. The results of simulation using the MNR and ANN methods showed correlation coefficients of 0.62 and 0.87, respectively. It was concluded that the performance of the ANN model in ROP prediction is notably better than that of the MNR method.
Adha, Kurniawan; Yusoff, Wan Ismail Wan; Almanna Lubis, Luluan
2017-10-01
Determining pore pressure data and overpressure zones is a compulsory part of oil and gas exploration: the data enhance safety and profitability and help prevent drilling hazards. Investigation of thermophysical parameters such as temperature and thermal conductivity can enhance pore pressure estimation for determining the overpressure mechanism. Since these parameters depend on rock properties, an abnormality in pore pressure may be reflected in the column of thermophysical parameters. The study was conducted in the “MRI 1” well offshore Sarawak, where a new approach was designed to determine the overpressure generation. The study emphasized the contribution of thermophysical parameters in support of the velocity analysis method, and petrophysical analyses were also performed. Four thermal facies were identified along the well. Overpressure developed below thermal facies 4, where the pressure reached 38 MPa and the temperature increased significantly. Cross plots of velocity and thermal conductivity show a linear relationship, since both parameters are mainly functions of rock compaction: as the rock becomes more compact, the particles are brought into closer contact, sound waves travel faster, and thermal conductivity increases. In addition, the increase in temperature and the high heat flow indicated the presence of a fluid-expansion mechanism. Shale sonic velocity and density analyses are the common methods for overpressure mechanism determination and pore pressure estimation; as additional parameters for determining the overpressure zone, the thermophysical analysis enhances the current method, which is a single function of velocity analysis. Thermophysical analysis will improve the understanding of overpressure mechanism determination by providing new input parameters. Thus, the integration of thermophysical techniques and velocity ...
Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories
International Nuclear Information System (INIS)
WILMARTH, S.R.
2000-01-01
This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte and the density of the waste, along with their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated based on other types of data. The three estimates are multiplied to obtain the inventory estimate. The standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the BBI is the approximate 95% confidence interval on the inventory.
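Because the inventory is a product of three estimates, combining the standard deviations reduces to first-order (delta-method) propagation of relative variances. A sketch under the assumption of independent estimates, with invented numbers (the document itself does not give the combination formula):

```python
import math

def inventory_with_uncertainty(conc, sd_conc, dens, sd_dens, vol, sd_vol):
    """First-order propagation for a product of three independent
    estimates: Inventory = Concentration x Density x Volume.
    Relative variances of independent factors add."""
    inv = conc * dens * vol
    rel_var = ((sd_conc / conc) ** 2 +
               (sd_dens / dens) ** 2 +
               (sd_vol / vol) ** 2)
    sd_inv = abs(inv) * math.sqrt(rel_var)
    # Approximate 95% confidence interval on the inventory.
    return inv, sd_inv, (inv - 1.96 * sd_inv, inv + 1.96 * sd_inv)

inv, sd, ci = inventory_with_uncertainty(2.0, 0.2, 1.5, 0.1, 100.0, 5.0)
```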
A pose estimation method for unmanned ground vehicles in GPS denied environments
Tamjidi, Amirhossein; Ye, Cang
2012-06-01
This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford campus LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset, and the estimation error is ~1.9% of the path length.
Methods for Estimation of Market Power in Electric Power Industry
Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.
2012-01-01
The article addresses the topical issue of the newly-arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, the work proposes a way to determine the relevant market place that takes into account the specific features of the power system, and gives a theoretical example of estimating the residual supply index (RSI) in the electricity market.
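The residual supply index mentioned above divides the market capacity remaining after removing one supplier by the demand; an RSI below 1 flags a pivotal supplier. A minimal sketch with invented capacities and demand (not the article's worked example):

```python
def residual_supply_index(capacities, demand, supplier):
    """RSI of a supplier: the share of demand the rest of the market can
    cover without that supplier's capacity. RSI < 1 means the supplier
    is pivotal and can potentially exercise market power."""
    residual = sum(capacities.values()) - capacities[supplier]
    return residual / demand

caps = {"A": 500.0, "B": 300.0, "C": 200.0}  # MW, illustrative
rsi_a = residual_supply_index(caps, demand=600.0, supplier="A")
```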
M-Arctan estimator based on the trust-region method
Energy Technology Data Exchange (ETDEWEB)
Hassaine, Yacine; Delourme, Benoit; Panciatici, Patrick [Gestionnaire du Reseau de Transport d Electricite Departement Methodes et appui Immeuble Le Colbert 9, Versailles Cedex (France); Walter, Eric [Laboratoire des signaux et systemes (L2S) Supelec, Gif-sur-Yvette (France)
2006-11-15
In this paper a new approach is proposed to increase the robustness of the classical L{sub 2}-norm state estimation. To achieve this task, a new formulation of the Levenberg-Marquardt algorithm based on the trust-region method is applied to a new M-estimator, which we call M-Arctan. Results obtained on IEEE networks of up to 300 buses are presented. (author)
Methods for design flood estimation in South Africa | Smithers ...
African Journals Online (AJOL)
The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...
Assessment of Methods for Estimating Risk to Birds from ...
The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16
Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-04-01
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
aboalhasan fathabadi
2017-02-01
Full Text Available Introduction: In order to implement watershed practices to decrease soil erosion effects, the sediment output of the watershed needs to be estimated. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using a sediment curve. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor, and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan, and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to a likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B times (set to ...
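The bootstrap branch of the methodology — resample (Q, C) pairs with replacement, refit the log-log rating curve, and take percentile bounds on the prediction — can be sketched as follows. The data here are synthetic and the rating-curve form C = a·Q^b is the usual power law, not necessarily the equations selected in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic discharge (Q) and suspended sediment concentration (C) pairs;
# in practice these would be the gauged observations.
q = rng.uniform(5.0, 200.0, size=80)
c = 0.4 * q ** 1.3 * np.exp(rng.normal(0.0, 0.3, size=80))

def fit_rating_curve(q, c):
    """Fit log C = log a + b log Q by least squares; return (a, b)."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

def bootstrap_ci(q, c, q_new, n_boot=2000, alpha=0.05):
    """Percentile CI for C(q_new): resample pairs with replacement,
    refit the curve, and take the alpha/2 and 1-alpha/2 quantiles."""
    preds = np.empty(n_boot)
    n = len(q)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)
        a, b = fit_rating_curve(q[idx], c[idx])
        preds[i] = a * q_new ** b
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(q, c, q_new=100.0)
```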
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
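The paper works in Microsoft Excel®, but the same resampling scheme — draw bootstrap samples with replacement and average the 2.5th/97.5th percentile estimates — can be sketched in Python for readers without Excel (the reference-sample data below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
reference = rng.normal(5.0, 0.5, size=40)  # e.g. 40 reference-sample results

def bootstrap_reference_interval(x, n_boot=1000):
    """Average the 2.5th and 97.5th percentiles over n_boot resamples
    drawn with replacement, mirroring the paper's resampling scheme."""
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(x, size=len(x), replace=True)
        lows[i], highs[i] = np.percentile(resample, [2.5, 97.5])
    return lows.mean(), highs.mean()

lo, hi = bootstrap_reference_interval(reference)
```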
Own-wage labor supply elasticities: variation across time and estimation methods
Directory of Open Access Journals (Sweden)
Olivier Bargain
2016-10-01
There is a huge variation in the size of labor supply elasticities in the literature, which hampers policy analysis. While recent studies show that preference heterogeneity across countries explains little of this variation, we focus on two other important features: observation period and estimation method. We start with a thorough survey of existing evidence for both Western Europe and the USA, over a long period and from different empirical approaches. Then, our meta-analysis attempts to disentangle the role of time changes and estimation methods. We highlight the key role of time changes, documenting the incredible fall in labor supply elasticities since the 1980s not only in the USA but also in the EU. In contrast, we find no compelling evidence that the choice of estimation method explains variation in elasticity estimates. From our analysis, we derive important guidelines for policy simulations.
A new method for estimating UV fluxes at ground level in cloud-free conditions
Wandji Nyamsi, William; Pitkänen, Mikko R. A.; Aoun, Youva; Blanc, Philippe; Heikkilä, Anu; Lakkala, Kaisa; Bernhard, Germar; Koskela, Tapani; Lindfors, Anders V.; Arola, Antti; Wald, Lucien
2017-12-01
A new method has been developed to estimate the global and direct solar irradiance in the UV-A and UV-B at ground level in cloud-free conditions. It is based on a resampling technique applied to the results of the k-distribution method and the correlated-k approximation of Kato et al. (1999) over the UV band. Its inputs are the aerosol properties and total column ozone that are produced by the Copernicus Atmosphere Monitoring Service (CAMS). The estimates from this new method have been compared to instantaneous measurements of global UV irradiances made in cloud-free conditions at five stations at high latitudes in various climates. For the UV-A irradiance, the bias ranges between -0.8 W m-2 (-3 % of the mean of all data) and -0.2 W m-2 (-1 %). The root mean square error (RMSE) ranges from 1.1 W m-2 (6 %) to 1.9 W m-2 (9 %). The coefficient of determination R2 is greater than 0.98. The bias for UV-B is between -0.04 W m-2 (-4 %) and 0.08 W m-2 (+13 %) and the RMSE is 0.1 W m-2 (between 12 and 18 %). R2 ranges between 0.97 and 0.99. This work demonstrates the quality of the proposed method combined with the CAMS products. Improvements, especially in the modeling of the reflectivity of the Earth's surface in the UV region, are necessary prior to its inclusion into an operational tool.
Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota
Lorenz, David L.; Ziegeweid, Jeffrey R.
2016-03-14
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
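The QPPQ transfer described above maps each daily flow at the index streamgage to an exceedance probability on the index flow-duration curve, then back to a flow on the ungaged site's curve. A minimal sketch with invented flow-duration curves (in practice the ungaged-site curve would come from the StreamStats regression equations):

```python
import numpy as np

# Flow-duration curves (FDC): flow vs. exceedance probability.
# Hypothetical index-gage FDC (flows ascending, probabilities descending):
index_flows = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
index_probs = np.array([0.95, 0.75, 0.50, 0.25, 0.05])
# Hypothetical ungaged-site FDC, ordered by ascending probability so that
# np.interp (which needs increasing x) can invert it:
ungaged_probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
ungaged_flows = np.array([32.0, 16.0, 8.0, 4.0, 2.0])

def qppq_transfer(index_daily_flows):
    """QPPQ: Q -> P on the index FDC, then P -> Q on the ungaged FDC,
    assuming equal exceedance probability at both sites on each day."""
    p = np.interp(index_daily_flows, index_flows, index_probs)
    return np.interp(p, ungaged_probs, ungaged_flows)

estimated = qppq_transfer(np.array([4.0, 16.0]))
```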
Method for developing cost estimates for generic regulatory requirements
International Nuclear Information System (INIS)
1985-01-01
The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.
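Step 5, aggregating plant costs over the plant lifetime as a net present value, reduces to discounting a yearly cost stream. An illustrative helper (the discount rate and cost figures are invented, not values from the method):

```python
def present_value_of_costs(annual_costs, discount_rate):
    """Net present value of a stream of yearly implementation costs,
    with costs assumed to fall at the end of years 1, 2, ..."""
    return sum(cost / (1.0 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

npv = present_value_of_costs([110.0], discount_rate=0.10)
```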
Directory of Open Access Journals (Sweden)
Dengxiao Lang
2017-09-01
Potential evapotranspiration (PET) is crucial for water resources assessment. In this regard, the FAO (Food and Agriculture Organization) Penman–Monteith method (PM) is commonly recognized as the standard method for PET estimation. However, due to its requirement for detailed meteorological data, the application of PM is often constrained in many regions. Under such circumstances, an alternative method with similar efficiency to that of PM needs to be identified. In this study, three radiation-based methods, Makkink (Mak), Abtew (Abt), and Priestley–Taylor (PT), and five temperature-based methods, Hargreaves–Samani (HS), Thornthwaite (Tho), Hamon (Ham), Linacre (Lin), and Blaney–Criddle (BC), were compared with PM at yearly and seasonal scale, using long-term (50 years) data from 90 meteorology stations in southwest China. Indicators, viz. (videlicet) Nash–Sutcliffe efficiency (NSE), relative error (Re), normalized root mean squared error (NRMSE), and coefficient of determination (R2), were used to evaluate the performance of PET estimations by the above-mentioned eight methods. The results showed that the performance of the methods in PET estimation varied among regions; HS, PT, and Abt overestimated PET, while the others underestimated it. In the Sichuan basin, Mak, Abt, and HS yielded estimations similar to those of PM, while in the Yun-Gui plateau, Abt, Mak, HS, and PT showed better performances. Mak performed the best in the east Tibetan Plateau at yearly and seasonal scale, while HS showed a good performance in summer and autumn. In the arid river valley, HS, Mak, and Abt performed better than the others. On the other hand, Tho, Ham, Lin, and BC could not be used to estimate PET in some regions. In general, radiation-based methods for PET estimation performed better than temperature-based methods among the selected methods in the study area. Among the radiation-based methods, Mak performed the best, while HS showed the best performance among the temperature-based methods.
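Of the temperature-based methods compared, Hargreaves–Samani has a compact closed form. A sketch with invented inputs (Ra must be supplied as extraterrestrial radiation in evaporation-equivalent mm/day; the coefficients are the standard published ones, not values refitted by the study):

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference PET (mm/day):
    PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    temperatures in degrees C, Ra in mm/day evaporation equivalent."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

pet = hargreaves_samani(t_mean=20.0, t_max=27.0, t_min=13.0, ra=15.0)
```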
International Nuclear Information System (INIS)
Wu Jingqin.
1989-01-01
The Yang Chizhong filtering and inferential measurement method is a new method for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formulation, convenient in application, rapid in calculation, accurate in results, less expensive, and of high economic benefit. The procedure and experience in the application of this method and a preliminary evaluation of its results are mainly described.
Comparing different methods for estimating radiation dose to the conceptus
Energy Technology Data Exchange (ETDEWEB)
Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)
2017-02-15
To compare different methods available in the literature for estimating radiation dose to the conceptus (D{sub conceptus}) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D{sub conceptus} was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D{sub uterus}, was calculated as an alternative for D{sub conceptus}, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D{sub uterus} and D{sub conceptus} was studied. Dose to the conceptus and percent error with respect to D{sub conceptus} was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5% but applicability was limited. Estimating an accurate D{sub conceptus} requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)
Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method
Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung
2015-04-01
In environmental and other scientific applications, we must have a certain understanding of the lithological composition of an area. Because of practical constraints, only a limited amount of data can be acquired. To find the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. We apply the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting
An automated background estimation procedure for gamma ray spectra
International Nuclear Information System (INIS)
Tervo, R.J.; Kennett, T.J.; Prestwich, W.V.
1983-01-01
An objective and simple method has been developed to estimate the background continuum in Ge gamma ray spectra. Requiring no special procedures, the method is readily automated. Based upon the inherent statistical properties of the experimental data itself, nodes which reflect background samples are located and used to produce an estimate of the continuum. A simple procedure to interpolate between nodes is reported, and a range of rather typical experimental data is presented. All information necessary to implement this technique is given, including the relevant properties of various factors involved in its development. (orig.)
Sinusoidal Order Estimation Using Angles between Subspaces
Directory of Open Access Journals (Sweden)
Søren Holdt Jensen
2009-01-01
We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to colored noise than the previously published methods.
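The principal angles underlying the proposed criterion can be computed directly, e.g. with SciPy's `subspace_angles`. A toy illustration with hand-built subspaces (this demonstrates the angle computation only, not the paper's order estimator):

```python
import numpy as np
from scipy.linalg import subspace_angles

# Columns span the candidate subspaces.
a = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])            # the x-y plane in R^3
b = np.array([[1.0], [0.0], [0.0]])   # the x axis, contained in span(a)
c = np.array([[0.0], [0.0], [1.0]])   # the z axis, orthogonal to span(a)

angle_contained = subspace_angles(a, b)[0]   # ~0: b lies inside span(a)
angle_orthogonal = subspace_angles(a, c)[0]  # ~pi/2: c is orthogonal
```

A model-order criterion of the kind the paper studies looks for the candidate order at which such angles between the estimated signal subspace and the noise subspace are maximally non-trivial.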
Evaluation and comparison of estimation methods for failure rates and probabilities
Energy Technology Data Exchange (ETDEWEB)
Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)
2006-02-01
An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with a gamma prior are used for failure rate estimation, and binomial data with a beta prior are used for estimating the failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems of small samples, identical data, and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
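The gamma prior / Poisson likelihood pairing used for failure-rate estimation is conjugate, so the single-plant update is a one-liner. This sketch shows only that update, not the PREB prior-moment matching across plants; the prior parameters and data below are invented:

```python
def gamma_poisson_posterior(a_prior, b_prior, failures, exposure_time):
    """Conjugate update: gamma(a, b) prior on the failure rate lambda,
    Poisson-distributed failure count over exposure_time.
    Returns posterior shape, posterior rate, and posterior mean rate."""
    a_post = a_prior + failures
    b_post = b_prior + exposure_time
    return a_post, b_post, a_post / b_post

# e.g. a diffuse gamma(0.5, 100 h) prior, 2 failures in 900 h:
a, b, mean_rate = gamma_poisson_posterior(0.5, 100.0, 2, 900.0)
```

Note that with zero failures the posterior mean stays positive (a_prior / b_post), which is one reason conjugate and empirical Bayes schemes are attractive for the zero-failure problems the abstract mentions.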
Comparison of estimation methods for fitting weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
Fill rate estimation in periodic review policies with lost sales using simple methods
Energy Technology Data Exchange (ETDEWEB)
Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.
2016-07-01
Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming. However, simple and suitable methods are needed for its estimation so that inventory managers can use them. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of different on-hand stock levels so that the fill rate can be computed afterwards. Findings: As a result, a novel proposed method outperforms the other methods and is relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed, and their performance is assessed.
Dental age estimation using Willems method: A digital orthopantomographic study
Directory of Open Access Journals (Sweden)
Rezwana Begum Mohammed
2014-01-01
In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Development of the mandibular teeth in the left quadrant (from central incisor to second molar) was assessed, and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years and is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method, and also an estimated age range for an individual of unknown CA.
Method to Locate Contaminant Source and Estimate Emission Strength
Directory of Open Access Journals (Sweden)
Qu Hongquan
2013-01-01
Full Text Available Air quality in confined spaces such as spacecraft, aircraft, and submarines is a matter of great concern. As residence times in such spaces increase, contaminant pollution becomes a major threat to life, so a contaminant source must be identified rapidly to allow prompt remedial action. A source identification procedure should both locate the position and estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve these two aims. The method is based on a discrete stochastic concentration model; with this model, a sensitivity analysis algorithm is derived to locate the source position, and a Kalman filter is used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically and can also predict the distribution of contaminant concentration. Simulation results demonstrate the merits of the method.
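The abstract above pairs a sensitivity-analysis step (source location) with a Kalman filter (emission strength). As a minimal illustration of the second step only, the sketch below runs a scalar Kalman filter that tracks a constant emission strength from noisy sensor concentrations; the transfer coefficient `h`, the noise variances and the synthetic data are all assumptions, not values from the paper.

```python
import numpy as np

def estimate_emission_strength(measurements, h, q=1e-4, r=0.05):
    """Scalar Kalman filter tracking a (nearly constant) emission strength.

    measurements : noisy concentration readings at the sensor
    h            : transfer coefficient mapping strength -> concentration
                   (in the paper this would come from the discrete stochastic
                   concentration model; here it is a given constant)
    q, r         : assumed process and measurement noise variances
    """
    s_est, p = 0.0, 1.0                      # initial strength estimate and variance
    history = []
    for z in measurements:
        p = p + q                            # predict: strength assumed nearly constant
        k = p * h / (h * h * p + r)          # Kalman gain
        s_est = s_est + k * (z - h * s_est)  # update with the innovation
        p = (1.0 - k * h) * p
        history.append(s_est)
    return history

# synthetic check: true strength 2.0, transfer coefficient 0.5
rng = np.random.default_rng(0)
true_s = 2.0
z = true_s * 0.5 + rng.normal(0.0, 0.05, 200)
est = estimate_emission_strength(z, h=0.5)
```

The filter converges toward the true strength while smoothing out the measurement noise; a time-varying source would be tracked with a lag governed by `q`.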
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
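Once the PSD-based attenuation values α(f) are available, the power-law parameters α0 and β are obtained by a fit. A minimal sketch of that last step, assuming the attenuation samples are already in hand (the values below are synthetic, not from the phantom study):

```python
import numpy as np

def fit_power_law(freqs, alpha):
    """Least-squares fit of alpha(f) = alpha0 * f**beta in log-log space."""
    beta, log_a0 = np.polyfit(np.log(freqs), np.log(alpha), 1)
    return np.exp(log_a0), beta

f = np.linspace(2e6, 8e6, 50)        # hypothetical 2-8 MHz analysis band
alpha_vals = 0.6e-6 * f ** 1.1       # synthetic attenuation values, not phantom data
a0, b = fit_power_law(f, alpha_vals)
```

Because the model is linear in log-log coordinates, the fit reduces to a first-degree polynomial regression; in the paper the inputs to this step come from the STFT, Welch, or multitaper PSD estimates.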
Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method
Hadi Farhadian; Homayoon Katibeh
2015-01-01
In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections. The results of the above methods were compared and show reasonable agreement among all methods except for two sections of the tunnel. In these t...
Numerical method for estimating the size of chaotic regions of phase space
International Nuclear Information System (INIS)
Henyey, F.S.; Pomphrey, N.
1987-10-01
A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2013-01-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and treats the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and treats the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
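The ISEM idea sketched in the two records above (black-box simulator, ensemble of directional derivatives, regularized Gauss-Newton update) can be illustrated on a toy linear "simulator". This is not the authors' implementation; the ensemble size, perturbation scale and Tikhonov parameter are arbitrary choices.

```python
import numpy as np

def isem_like_update(simulator, m, d_obs, n_ens=20, sigma=0.1, reg=1e-6, rng=None):
    """One ensemble-based Gauss-Newton-style update (sketch of the ISEM idea).

    The simulator is treated as a black box: an approximate Jacobian is
    recovered from an ensemble of random directional derivatives, and the
    normal equations are Tikhonov-regularized.
    """
    rng = rng or np.random.default_rng(0)
    g0 = simulator(m)
    P = rng.normal(size=(n_ens, m.size)) * sigma          # perturbation directions
    D = np.array([simulator(m + p) - g0 for p in P])      # directional responses
    J = np.linalg.lstsq(P, D, rcond=None)[0].T            # least-squares Jacobian: D ≈ P @ J.T
    r = d_obs - g0
    H = J.T @ J + reg * np.eye(m.size)                    # regularized normal matrix
    return m + np.linalg.solve(H, J.T @ r)

# synthetic linear "flow model": d = A m, with two unknown parameters
A = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, 1.0]])
m_true = np.array([1.0, -0.5])
d = A @ m_true
m = np.zeros(2)
for _ in range(5):
    m = isem_like_update(lambda x: A @ x, m, d)
```

For this linear toy the ensemble Jacobian is exact and the iteration converges in essentially one step; for a nonlinear simulator several iterations would be needed.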
Stress estimation in reservoirs using an integrated inverse method
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimates for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
Microwave remote sensing of soil moisture for estimation of profile soil property
International Nuclear Information System (INIS)
Mattikalli, N.M.; Engman, E.T.; Ahuja, L.R.; Jackson, T.J.
1998-01-01
Multi-temporal microwave remotely sensed soil moisture has been utilized for the estimation of a profile soil property, viz. the soil hydraulic conductivity. Passive microwave remote sensing was employed to collect daily soil moisture data across the Little Washita watershed, Oklahoma, during 10-18 June 1992. The ESTAR (Electronically Steered Thin Array Radiometer) instrument operating at L-band was flown on a NASA C-130 aircraft. Brightness temperature (TB) data collected at a ground resolution of 200 m were employed to derive the spatial distribution of surface soil moisture. Analysis of spatial and temporal soil moisture information in conjunction with soils data revealed a direct relation between changes in soil moisture and soil texture. A geographical information system (GIS) based analysis suggested that the 2-day initial drainage of soil, measured from remote sensing, was related to an important soil hydraulic property, viz. the saturated hydraulic conductivity (Ksat). A hydrologic modelling methodology was developed for estimation of Ksat of surface and sub-surface soil layers. Specifically, soil hydraulic parameters were optimized to obtain a good match between model-estimated and field-measured soil moisture profiles. Relations between the 2-day soil moisture change and Ksat of the 0-5 cm, 0-30 cm and 0-60 cm depths yielded correlations of 0.78, 0.82 and 0.71, respectively. These results are comparable to the findings of previous studies involving laboratory-controlled experiments and numerical simulations, and support their extension to the field conditions of the Little Washita watershed. These findings suggest potential applications of microwave remote sensing to obtain 2 days of soil moisture data and then quickly estimate the spatial distribution of Ksat over large areas. (author)
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Actual product lifespans have been reported in the literature; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimates of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distributions. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all countries and years, enabling a simplified estimation that does not require detailed age-profile data. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many of them. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results also suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
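One way to read the simplified method: with the Weibull shape parameter fixed to a constant, a single observed survival fraction of a registration cohort pins down the scale, and hence the average lifespan. The sketch below is a hypothetical illustration of that logic; the shape value 2.5 and the survival data are assumptions, not the paper's estimates.

```python
import numpy as np
from math import gamma

def lifespan_from_survival(age, surviving_fraction, shape=2.5):
    """Recover the Weibull scale from one survival observation and return the
    implied average lifespan, with the shape parameter held fixed (as in the
    simplified method; the shape value here is an assumption).

    Survival model: S(a) = exp(-(a / scale)**shape).
    """
    scale = age / (-np.log(surviving_fraction)) ** (1.0 / shape)
    return scale * gamma(1.0 + 1.0 / shape)   # Weibull mean = scale * Γ(1 + 1/shape)

# e.g. 60% of the cars registered 10 years ago are still in use (made-up numbers)
avg = lifespan_from_survival(10.0, 0.60, shape=2.5)
```

With richer age-profile data, the same fixed-shape assumption lets the scale be fit by least squares over many cohorts instead of inverted from a single point.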
Methods for design flood estimation in South Africa
African Journals Online (AJOL)
2012-07-04
Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and .... transposition of past experience, or a deterministic approach,.
Reliability of Estimation Pile Load Capacity Methods
Directory of Open Access Journals (Sweden)
Yudhi Lastiasih
2014-04-01
Full Text Available It is not known how accurate any of the numerous existing methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Of these 130 data sets, only 44 could be analysed, of which 15 were from tests conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). All of the above methods were found to be sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from tests that did not reach failure, the methods yielding lower values for the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation based on soil data only.
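Chin's (1970) method, one of the extrapolation techniques compared above, assumes a hyperbolic load-settlement curve, so that settlement/load plotted against settlement is linear and the inverse of the slope estimates the ultimate capacity. A minimal sketch on synthetic data (not from the 130 field tests):

```python
import numpy as np

def chin_ultimate_capacity(settlement, load):
    """Chin's extrapolation: s/Q vs s is assumed linear for a hyperbolic
    load-settlement curve; Qult is the inverse of the fitted slope."""
    y = settlement / load
    slope, _ = np.polyfit(settlement, y, 1)
    return 1.0 / slope

# synthetic hyperbolic load-settlement data with Qult = 2000 kN (illustrative)
s = np.linspace(1.0, 30.0, 15)        # settlement, mm
C1, C2 = 1.0 / 2000.0, 0.002          # hyperbola Q = s / (C1*s + C2)
Q = s / (C1 * s + C2)
q_ult = chin_ultimate_capacity(s, Q)
```

On real test data that did not reach failure, the extrapolated Qult would then be divided by a correction factor N, as discussed in the abstract.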
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Directory of Open Access Journals (Sweden)
Wei-Kuang Lai
2016-02-01
Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze signals from cellular networks, e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs). For traffic information estimation, analytic models are proposed to estimate the traffic flow from the numbers of HOs and NLUs and the traffic density from the numbers of CAs and PLUs. Vehicle speeds can then be estimated from the estimated traffic flows and densities. For vehicle speed forecasting, a back-propagation neural network is used to predict the future vehicle speed from the current traffic information (i.e., the vehicle speeds estimated from CFVD). In the experimental environment, this study adopted practical traffic information (traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as input to the traffic simulation program and referred to mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results show that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for intelligent transportation systems.
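The estimation step rests on the fundamental traffic relation speed = flow / density, with flow inferred from HO/NLU counts and density from CA/PLU counts. A toy sketch of that relation (the calibration constants are hypothetical placeholders, not the paper's analytic models):

```python
def estimate_speed(handovers, location_updates, a=1.0, b=1.0):
    """Sketch of the CFVD idea: flow taken proportional to handover counts,
    density proportional to periodic-location-update counts, and speed from
    the fundamental relation v = flow / density. The coefficients a and b are
    hypothetical calibration constants."""
    flow = a * handovers             # vehicles per hour
    density = b * location_updates   # vehicles per km
    return flow / density            # km per hour

v = estimate_speed(handovers=1800, location_updates=20)
```

In the paper, the resulting speed series (rather than raw counts) feeds the back-propagation network for forecasting.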
Estimates of the relative specific yield of aquifers from geo-electrical ...
African Journals Online (AJOL)
This paper discusses a method of estimating aquifer specific yield based on surface resistivity sounding measurements supplemented with data on water conductivity. The practical aim of the method is to offer a parallel, low-cost way of estimating aquifer properties. The starting point is Archie's law, which relates ...
A review of models and micrometeorological methods used to estimate wetland evapotranspiration
Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.
2004-01-01
Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch, however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β ≈ -1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn - G - H, thereby allowing for an independent test of accuracy. The surface renewal method is inexpensive to replicate and, therefore, shows
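The Bowen ratio energy balance (BREB) logic mentioned above can be written in a few lines: the available energy Rn - G is split between latent and sensible heat according to β = H/λE, which fails as β approaches -1. A sketch with illustrative flux values (not measurements from any wetland in the review):

```python
def bowen_ratio_latent_heat(rn, g, beta):
    """Bowen ratio energy balance: latent heat flux density from available
    energy (Rn - G) and the Bowen ratio beta = H / lambdaE. The method becomes
    unreliable as beta approaches -1 (the near-sunrise/sunset failure)."""
    if abs(1.0 + beta) < 0.1:                 # illustrative guard threshold
        raise ValueError("BREB unreliable: beta too close to -1")
    return (rn - g) / (1.0 + beta)

lam_e = bowen_ratio_latent_heat(rn=500.0, g=50.0, beta=0.25)   # W m^-2
h = 0.25 * lam_e                                               # sensible heat flux
```

By construction λE + H equals Rn - G, which is exactly the closure check that eddy covariance measurements allow one to test independently.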
Guideline for Bayesian Net based Software Fault Estimation Method for Reactor Protection System
International Nuclear Information System (INIS)
Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol
2011-01-01
The purpose of this paper is to provide a preliminary guideline for the estimation of software faults in safety-critical software, for example, the software of a reactor protection system. As the fault estimation method is based on a Bayesian net, which makes intensive use of subjective probability and informal data, it is necessary to define a formal procedure for the method to minimize variability in the results. The guideline describes the assumptions, limitations, uncertainties, and products of the fault estimation method. The procedure for conducting the software fault estimation is then outlined, highlighting the major tasks involved. The contents of the guideline are based on our own experience and a review of research guidelines developed for PSA.
Reliability analysis based on a novel density estimation method for structures with correlations
Directory of Open Access Journals (Sweden)
Baoyu LI
2017-06-01
Full Text Available Estimating the probability density function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be obtained by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In practice, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required in reliability analysis, which is determined by UT, is very small. Several examples illustrate the accuracy and advantages of the proposed method.
Ridge Distance Estimation in Fingerprint Images: Algorithm and Performance Evaluation
Directory of Open Access Journals (Sweden)
Tian Jie
2004-01-01
Full Text Available It is important to accurately estimate the ridge distance, an intrinsic texture property of a fingerprint image. Up to now, only a few articles have dealt directly with ridge distance estimation, and little has been published providing detailed evaluation of methods for it, in particular the traditional spectral analysis method applied in the frequency domain. In this paper, a novel method based on non-overlapping blocks, called the statistical method, is presented to estimate the ridge distance. Direct estimation ratio (DER) and estimation accuracy (EA) are defined and used, along with time consumption (TC), as parameters to evaluate the performance of these two methods for ridge distance estimation. Based on a comparison of the performance of these two methods, a third, hybrid method is developed to combine the merits of both. Experimental results indicate that DER is 44.7%, 63.8%, and 80.6%; EA is 84%, 93%, and 91%; and TC is , , and seconds, with the spectral analysis method, statistical method, and hybrid method, respectively.
Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques
Energy Technology Data Exchange (ETDEWEB)
Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-12-01
A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
Nuclear graphite wear properties and estimation of graphite dust production in HTR-10
Energy Technology Data Exchange (ETDEWEB)
Luo, Xiaowei, E-mail: xwluo@tsinghua.edu.cn; Wang, Xiaoxin; Shi, Li; Yu, Xiaoyu; Yu, Suyuan
2017-04-15
Highlights: • Graphite dust. • The wear properties of graphite. • Pebble bed. • High Temperature Gas-cooled Reactor. • Fuel element. - Abstract: The issue of graphite dust has been a research focus for the safety of High Temperature Gas-cooled Reactors (HTGRs), especially pebble bed reactors. Most of the graphite dust is produced by wear of the fuel elements during fuel element cycling. However, due to the complexity of the motion of the fuel elements in the pebble bed, no systematic method has been developed to predict the amount of graphite dust in a pebble bed reactor. In this paper, a study of the flow of the fuel elements in the pebble bed was carried out. Both theoretical calculation and numerical analysis with the Discrete Element Method (DEM) software PFC3D were conducted to obtain the normal forces and sliding distances of the fuel elements in the pebble bed. Wear theory was then integrated with PFC3D to estimate the amount of graphite dust in a pebble bed reactor, the 10 MW High Temperature gas-cooled test Reactor (HTR-10).
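A common way to turn DEM contact outputs (normal force, sliding distance) into a wear volume is an Archard-type law. Whether this matches the wear theory coupled with PFC3D in the paper is not stated here, so the sketch below is purely illustrative, with made-up contact data and coefficients.

```python
def archard_wear_volume(k, normal_force, sliding_distance, hardness):
    """Archard wear law: worn volume ~ k * F * s / H, i.e. proportional to
    normal load times sliding distance, inversely proportional to hardness.
    Shown only to illustrate how per-contact DEM outputs could feed a dust
    estimate; k, H and the contact data are illustrative, not HTR-10 values."""
    return k * normal_force * sliding_distance / hardness

# hypothetical per-contact (normal force [N], sliding distance [m]) pairs
contacts = [(10.0, 0.02), (8.0, 0.015), (12.0, 0.03)]
total_volume = sum(archard_wear_volume(1e-4, f, s, 5e7) for f, s in contacts)
```

In a full analysis, this sum would run over the millions of contact events tracked by the DEM simulation and be converted to dust mass via the graphite density.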
Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data
Buonocore, R. J.; Aste, T.; Di Matteo, T.
2017-04-01
We propose a method to measure the Hurst exponents of financial time series. The scaling of the absolute moments against the aggregation horizon of real financial processes, and of both uniscaling and multiscaling synthetic processes, converges asymptotically towards linearity in log-log scale. In light of this, we found it appropriate to modify the usual scaling equation via the introduction of a filter function. We devised a measurement procedure that takes into account the presence of the filter function without the need to estimate it directly. We verified that the method is unbiased within the errors by applying it to synthetic time series with known scaling properties. Finally, we show an application to empirical financial time series where we fit the measured scaling exponents via a second- or fourth-degree polynomial, which, because of theoretical constraints, has respectively only one or two degrees of freedom. We found that on our data set there is no clear preference between the second- and fourth-degree polynomials. Moreover, the study of the filter functions of each time series shows common patterns of convergence depending on the moment degree.
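The underlying scaling relation is E|X(t+τ) - X(t)|^q ~ τ^(qH(q)). The basic log-log estimator of the generalized Hurst exponent H(q), without the filter-function correction the paper introduces, can be sketched as follows; for Brownian motion it should return H ≈ 0.5.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling of the q-th absolute moment of the
    increments, E|x(t+tau) - x(t)|^q ~ tau**(q * H(q)). This is the plain
    scaling estimator, without the paper's filter-function correction."""
    taus = np.array(list(taus))
    m = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(m), 1)[0]   # slope = q * H(q)
    return slope / q

rng = np.random.default_rng(7)
bm = np.cumsum(rng.normal(size=100_000))   # Brownian motion: H(q) = 0.5 for all q
h2 = generalized_hurst(bm, q=2)
```

For a multiscaling process, repeating the estimate over several q values traces out a non-constant H(q), which is what the polynomial fits in the abstract characterize.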
Estimation of deuterium content in organic compounds by mass spectrometric methods
International Nuclear Information System (INIS)
Dave, S.M.; Goomer, N.C.
1979-01-01
Many organic compounds are finding increasing importance in the heavy water enrichment programme. New methods based on quantitative chemical conversion have been developed and standardized for estimating the deuterium content of exchanging organic molecules by mass spectrometry. The methods have been selected in such a way that the deuterium content of both the exchangeable and the total hydrogens in the molecule can be conveniently estimated. (auth.)
NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION
International Nuclear Information System (INIS)
Brown, Robert A.; Soummer, Remi
2010-01-01
We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade fuel.
International Nuclear Information System (INIS)
Xu, Liang; Yuan, Jingqi
2015-01-01
Thermodynamic properties of the working fluid and the flue gas play an important role in thermodynamic calculations for boiler design and operational optimization in power plants. In this study, a generic approach for the online calculation of the thermodynamic properties of the flue gas is proposed, based on estimation of its composition. It covers the full operating range of the flue gas, including the two-phase state when the temperature drops below the dew point. The composition of the flue gas is estimated online from the routine offline assays of coal samples and the online measured oxygen mole fraction in the flue gas. The relative error of the proposed approach is found to be less than 1% when the standard data set of dry and humid air and typical flue gas is used for validation. Also, a sensitivity analysis of the individual components and the influence of the measurement error of the oxygen mole fraction on the thermodynamic properties of the flue gas are presented. - Highlights: • Flue gas thermodynamic properties in coal-fired power plants are calculated online. • Flue gas composition is estimated online using the measured oxygen mole fraction. • The proposed approach covers the full operating range, including two-phase flue gas. • Component sensitivity of the thermodynamic properties of flue gas is presented.
International Nuclear Information System (INIS)
Buendia, R; Seoane, F; Lindecrantz, K; Bosaeus, I; Gil-Pita, R; Johannsson, G; Ellegård, L; Ward, L C
2015-01-01
Determination of body fluids is a common and useful practice in the study of disease mechanisms and treatments. Bioimpedance spectroscopy (BIS) methods are non-invasive, inexpensive and rapid alternatives to reference methods such as tracer dilution. However, they are indirect, and their robustness and validity are unclear. In this article, state-of-the-art methods are reviewed, their drawbacks identified and new methods proposed. All methods were tested on a clinical database of patients receiving growth hormone replacement therapy. Results indicated that most BIS methods are similarly accurate (e.g. < 0.5 ± 3.0% mean percentage difference for total body water) for estimation of body fluids. A new model for calculation is proposed that performs equally well for all fluid compartments (total body water, extra- and intracellular water). It is suggested that the main source of error in extracellular water estimation is anisotropy, in total body water estimation the uncertainty associated with intracellular resistivity, and in determination of intracellular water a combination of both. (paper)
ESTIMATING RISK ON THE CAPITAL MARKET WITH VaR METHOD
Directory of Open Access Journals (Sweden)
Sinisa Bogdan
2015-06-01
Full Text Available The two basic questions that every investor tries to answer before investing concern predicted return and risk. Risk and return are generally considered to be positively correlated: as risk grows, a higher return is expected to compensate for it. The quantification of risk in the capital market has been a live topic since the advent of securities, and together with estimated future returns it represents the starting point of any investment. This study describes the history of the emergence of VaR methods and their usefulness in assessing the risks of financial assets. Three main Value at Risk (VaR) methodologies are described and explained in detail: the historical method, the parametric method, and the Monte Carlo method. After the theoretical review of VaR methods, the risk of liquid stocks and of a portfolio from the Croatian capital market is estimated with the historical and parametric VaR methods, after which the results are compared and explained.
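The historical and parametric methods described above can be condensed into a few lines: historical VaR is an empirical loss quantile of past returns, while parametric (variance-covariance) VaR assumes normally distributed returns. A sketch on synthetic daily returns (confidence level and data are illustrative, not the Croatian stock data):

```python
import numpy as np
from statistics import NormalDist

def historical_var(returns, level=0.95):
    """Historical VaR: the empirical loss quantile of past returns."""
    return -np.quantile(returns, 1.0 - level)

def parametric_var(returns, level=0.95):
    """Parametric (variance-covariance) VaR assuming normal returns."""
    z = NormalDist().inv_cdf(1.0 - level)
    return -(np.mean(returns) + z * np.std(returns))

rng = np.random.default_rng(42)
r = rng.normal(0.0005, 0.01, 2500)   # synthetic daily returns, ~10 years
v_hist = historical_var(r)           # e.g. loss exceeded on 5% of days
v_par = parametric_var(r)
```

Because the synthetic returns really are normal, the two estimates nearly coincide; on fat-tailed market data the parametric figure typically understates the historical one, which is the practical tension the abstract's comparison explores.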
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a low numerical convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
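The thermodynamic (path sampling) identity behind the method is log p(d) = ∫₀¹ E_β[log L] dβ, where the expectation is taken under the tempered posterior p_β(m) ∝ L(m)^β p(m) and β is the heating coefficient. The sketch below checks this identity on a conjugate Gaussian toy model where E_β[log L] is available in closed form, so no MCMC noise enters; all values are illustrative.

```python
import numpy as np

def power_posterior_expected_loglik(beta, d, sigma2, tau2):
    """E_beta[log L] for data d ~ N(m, sigma2) with prior m ~ N(0, tau2),
    under the tempered posterior p_beta(m) ∝ L(m)**beta * p(m). Closed-form
    here so the thermodynamic identity can be verified without MCMC noise."""
    v = 1.0 / (1.0 / tau2 + beta / sigma2)   # tempered posterior variance
    mu = v * beta * d / sigma2               # tempered posterior mean
    e_sq = (d - mu) ** 2 + v                 # E[(d - m)^2] under p_beta
    return -0.5 * np.log(2 * np.pi * sigma2) - e_sq / (2 * sigma2)

d, sigma2, tau2 = 1.3, 0.5, 2.0
betas = np.linspace(0.0, 1.0, 2001)          # the "temperature ladder"
f_vals = power_posterior_expected_loglik(betas, d, sigma2, tau2)
# trapezoid rule over beta approximates the log marginal likelihood
log_ml_ti = float(np.sum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(betas)))
log_ml_exact = -0.5 * np.log(2 * np.pi * (sigma2 + tau2)) - d ** 2 / (2 * (sigma2 + tau2))
```

In the paper's setting the closed-form expectation is replaced by the sample average of log-likelihoods from the MCMC run at each β, and the same numerical integration over the ladder gives the marginal likelihood used for model probabilities.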
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
Property Valuation: Integration of Methods and Determination of Depreciation
Tempelmans Plat, H.; Verhaegh, M.
2000-01-01
Property valuation up to now is a global guess. On the one hand we have the Investment Method which regards a property as just a sum of money, on the other hand we have the Contractor's Method which is based on the actual new construction costs of the building and the actual value of the land. Both
Small sample GEE estimation of regression parameters for longitudinal data.
Paul, Sudhir; Zhang, Xuemao
2014-09-28
Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
VHTRC experiment for verification test of H∞ reactivity estimation method
International Nuclear Information System (INIS)
Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko
1996-02-01
This experiment was performed at VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits and the data processing software are described in detail. (author)
Effect of starch isolation method on properties of sweet potato starch
Directory of Open Access Journals (Sweden)
A. SURENDRA BABU
2014-08-01
Full Text Available The isolation method and the agent used influence starch properties, which motivates identifying the most appropriate method for isolating starch. In the present study, sweet potato starch was isolated by sodium metabisulphite (M1), sodium chloride (M2), and distilled water (M3) methods, and the starches were assessed for functional, chemical, pasting and structural properties. M3 yielded the greatest recovery of starch (10.20%). The isolation methods significantly changed swelling power and pasting properties, but the starches exhibited similar chemical properties. Sweet potato starches possessed a C-type diffraction pattern. Small granules of 2.90 μm were noticed in the SEM of the M3 starch. A strong positive correlation was found between ash, amylose, and total starch content. The study concluded that the isolation method brings changes in the yield, pasting and structural properties of sweet potato starch.
Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries
Directory of Open Access Journals (Sweden)
Zhongyue Zou
2014-08-01
Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first; simulations and experiments are then set up to evaluate them. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving conditions of an electrified vehicle, and a genetic algorithm is used to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. The experimental results are plotted and analyzed with respect to the four aspects to evaluate the four model-based SOC estimation methods.
Methods for estimation of internal dose of the public from dietary
International Nuclear Information System (INIS)
Zhu Hongda
1987-01-01
Following the issue of its Publication 26, ICRP successively published its Publication 30 to reflect the major changes and improvements made in the Basic Recommendations since July 1979. In Part 1 of Publication 30, ICRP recommended a new method for internal dose estimation and presented some important data. In this report, a comparison is made among methods for estimating the internal dose to the public from dietary intake. They include: (1) the new method suggested by ICRP; (2) the simple and convenient method using transfer factors under equilibrium conditions; (3) the methods based on the similarities of several radionuclides to their chemical analogs. It is concluded that the first method is better than the others and should be used from now on
Wang, Han; Nakamura, Haruki; Fukuda, Ikuo
2016-03-21
We performed extensive and strict tests of the reliability of the zero-multipole summation method (ZMM), a method for estimating the electrostatic interactions among charged particles in a classical physical system, by investigating a set of various physical quantities. This set covers a broad range of water properties, including thermodynamic properties (pressure, excess chemical potential, constant volume/pressure heat capacity, isothermal compressibility, and thermal expansion coefficient), dielectric properties (dielectric constant and Kirkwood-G factor), dynamical properties (diffusion constant and viscosity), and the structural property (radial distribution function). We selected a bulk water system, the most important solvent, and applied the widely used TIP3P model to this test. Overall, the ZMM works well in almost all cases, compared with a carefully optimized smooth particle mesh Ewald (SPME) method. In particular, at a cut-off radius of 1.2 nm, the recommended choices of ZMM parameters for the TIP3P system are α ≤ 1 nm⁻¹ for the splitting parameter and l = 2 or l = 3 for the order of the multipole moment. We discuss the origin of the deviations of the ZMM and find that they are intimately related to the deviations of the equilibrated densities between the ZMM and SPME, although the magnitude of the density deviations is very small.
A method for the estimation of dual transmissivities from slug tests
Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz
2018-03-01
Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
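The bi-exponential rate-of-rise curve described above can be fitted to recover two decay rates, each serving as a proxy for the transmissivity of one flow system. This is a minimal sketch on synthetic data, assuming a normalized two-term exponential model; it is not the interpretation algorithm of the pressure-induced permeability test itself, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exp(t, a1, k1, a2, k2):
    """Bi-exponential rate-of-rise model: two flow systems recovering in parallel."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 300)
true = (0.7, 2.0, 0.3, 0.1)                          # fast and slow components
h = bi_exp(t, *true) + rng.normal(0, 0.003, t.size)  # noisy normalized head data

p0 = (0.5, 1.0, 0.5, 0.05)                           # rough initial guess
popt, _ = curve_fit(bi_exp, t, h, p0=p0)
if popt[1] < popt[3]:                                # sort so the faster decay comes first
    popt = popt[[2, 3, 0, 1]]
```

The steep early part of the curve constrains the fast rate and the tail constrains the slow one, mirroring the piecewise interpretation of the two curve segments described in the abstract.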
A review of instrumental variable estimators for Mendelian randomization.
Burgess, Stephen; Small, Dylan S; Thompson, Simon G
2017-10-01
Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
New Vehicle Detection Method with Aspect Ratio Estimation for Hypothesized Windows
Directory of Open Access Journals (Sweden)
Jisu Kim
2015-12-01
Full Text Available All kinds of vehicles have different ratios of width to height, which are called the aspect ratios. Most previous works, however, use a fixed aspect ratio for vehicle detection (VD). The use of a fixed vehicle aspect ratio for VD degrades the performance. Thus, the estimation of a vehicle aspect ratio is an important part of robust VD. Taking this idea into account, a new on-road vehicle detection system is proposed in this paper. The proposed method estimates the aspect ratio of the hypothesized windows to improve the VD performance. Our proposed method uses an Aggregate Channel Feature (ACF) and a support vector machine (SVM) to verify the hypothesized windows with the estimated aspect ratio. The contribution of this paper is threefold. First, the estimation of vehicle aspect ratio is inserted between the HG (hypothesis generation) and the HV (hypothesis verification). Second, a simple HG method named a signed horizontal edge map is proposed to speed up VD. Third, a new measure is proposed to represent the overlapping ratio between the ground truth and the detection results. This new measure is used to show that the proposed method is better than previous works in terms of robust VD. Finally, the Pittsburgh dataset is used to verify the performance of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Telfeyan, Katherine Christina [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-06
Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Estimation of arsenic in nail using silver diethyldithiocarbamate method
Directory of Open Access Journals (Sweden)
Habiba Akhter Bhuiyan
2015-08-01
Full Text Available The spectrophotometric method of arsenic estimation in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsine generation, and finally (d) reading the absorbance on a spectrophotometer. Although the method is inexpensive, widely used, and effective, it is time-consuming and laborious, and caution is needed when handling the four acids.
Performance of sampling methods to estimate log characteristics for wildlife.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton
2004-01-01
Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...
Investigation on method of estimating the excitation spectrum of vibration source
International Nuclear Information System (INIS)
Zhang Kun; Sun Lei; Lin Song
2010-01-01
In the practical engineering domain, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. A simplified method is then proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. This method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation with the estimated excitation, the reliability of the method is verified. (authors)
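The core inversion step, multiplying the measured response spectrum by the (pseudo)inverse of the experimentally obtained transfer matrix, can be sketched for a single frequency line. The matrix dimensions and the noise-free data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_src = 8, 3                       # measurement points, virtual excitation points

# Complex transfer matrix (response per unit excitation) at one frequency line,
# as would be identified experimentally
H = rng.normal(size=(n_meas, n_src)) + 1j * rng.normal(size=(n_meas, n_src))
F_true = rng.normal(size=n_src) + 1j * rng.normal(size=n_src)  # unknown excitation
X = H @ F_true                             # response spectrum at the measurement points

# Least-squares inversion: with more measurement points than sources,
# the pseudoinverse gives the best-fit excitation spectrum
F_est = np.linalg.pinv(H) @ X
```

Over-determining the system (more measurement points than excitation points) is what makes the inversion robust to measurement noise in practice.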
A probabilistic method for testing and estimating selection differences between populations.
He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li
2015-12-01
Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
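The key statistic above, the log odds ratio of allele frequencies standardized by a genome-wide null spread, can be sketched as follows. This is a simplified sketch, not the authors' full probabilistic model: drift is modeled as small Gaussian perturbations, and all sample sizes and frequencies are illustrative.

```python
import numpy as np

def log_odds_ratio(p1, p2):
    """Difference in allele-frequency log odds between two populations."""
    return np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))

rng = np.random.default_rng(0)

# Neutral genome-wide variants: shared ancestral frequencies plus small drift
anc = rng.uniform(0.2, 0.8, 10_000)
p1 = np.clip(anc + rng.normal(0, 0.01, anc.size), 0.01, 0.99)
p2 = np.clip(anc + rng.normal(0, 0.01, anc.size), 0.01, 0.99)
null_lor = log_odds_ratio(p1, p2)
scale = null_lor.std()        # genome-wide estimate of the null spread

# A focal variant with a large frequency difference between the populations
z_focal = log_odds_ratio(0.6, 0.2) / scale
z_null = null_lor / scale     # approximately standard normal under no selection difference
```

The standardized statistic behaves like a z-score, so ordinary hypothesis tests and confidence intervals apply, which is the link to association testing noted in the abstract.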
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
SummaryA recently developed unified model for partially-penetrating slug tests in unconfined aquifers ( Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. Secondly, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, open for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
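The one-step local linear approximation can be sketched for SCAD-penalized linear regression: a lasso initializer supplies the penalty-derivative weights, and a single weighted-lasso solve follows. The coordinate-descent solver and all problem sizes below are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso_cd(X, y, w, n_iter=300):
    """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    beta, r = np.zeros(p), y.copy()          # r is the current residual y - Xb
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ss[j] * beta[j]
            new = soft(rho, w[j]) / col_ss[j]
            r += X[:, j] * (beta[j] - new)
            beta[j] = new
    return beta

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty, used as LLA weights."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0
y = X @ beta_true + rng.normal(0, 0.5, n)

lam = 0.2
beta_init = weighted_lasso_cd(X, y, np.full(p, lam))            # lasso initializer
beta_lla = weighted_lasso_cd(X, y, scad_deriv(beta_init, lam))  # one-step LLA
```

Large initial coefficients receive zero weight (an unpenalized refit, removing the lasso's shrinkage bias), while small ones keep the full penalty, which is how the one-step LLA lands on the oracle solution when the initializer is adequate.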
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
Full Text Available In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a total quality management system are considered.
Research on the Method of Noise Error Estimation of Atomic Clocks
Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.
2017-05-01
Simulation methods for the different noise types of atomic clocks are given. The frequency flicker noise of an atomic clock is studied using Markov process theory, and the method for estimating the maximum interval error of white frequency noise is studied using Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time and frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and simulations are carried out according to the noise models. Finally, the maximum interval error estimates for the white frequency noise generated by the 9 cesium atomic clocks are obtained.
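For the white-frequency-noise case, where the clock phase error is a Wiener process, the maximum interval error lends itself to a simple ensemble estimate. This is a sketch under an assumed noise level; the NTSC clock coefficients are not used, and the sampling interval and ensemble size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
tau0 = 1.0                        # sampling interval, s
sigma_y = 1e-13                   # assumed white-frequency-noise level (fractional frequency)
n_steps, n_runs = 10_000, 500     # horizon in samples, ensemble size

# Under white FM noise the frequency samples are i.i.d. normal, so the phase error
# x(t) is their cumulative sum: a discrete Wiener process.
yfreq = rng.normal(0.0, sigma_y, size=(n_runs, n_steps))
x = np.cumsum(yfreq, axis=1) * tau0

# 95th percentile of the worst phase excursion over the interval
max_err = np.abs(x).max(axis=1)
bound_95 = np.quantile(max_err, 0.95)
```

Because the underlying process is a Wiener process, the excursion bound grows as the square root of the horizon, a property that can be used as a sanity check on the simulation.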
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
International Nuclear Information System (INIS)
Norris, Edward T.; Liu, Xin; Hsieh, Jiang
2015-01-01
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer
Estimation of piezoelastic and viscoelastic properties in laminated structures
DEFF Research Database (Denmark)
Araujo, A. L.; Soares, C. M. Mota; Herskovits, J.
2009-01-01
An inverse method for material parameter estimation of elastic, piezoelectric and viscoelastic laminated plate structures is presented. The method uses a gradient-based optimization technique to solve the inverse problem, through minimization of an error functional which expresses the difference between experimental free vibration data and corresponding numerical data produced by a finite element model. The complex modulus approach is used to model the viscoelastic material behavior, assuming hysteretic-type damping. Applications that illustrate the influence of adhesive material...
Methods to estimate breeding values in honey bees
Brascamp, E.W.; Bijma, P.
2014-01-01
Background: Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers
Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.
Auerbach, Benjamin M; Ruff, Christopher B
2004-12-01
In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.
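Both families of techniques reduce, in their simplest form, to linear predictors of body mass from skeletal dimensions. The sketch below illustrates the structure only; the coefficients are placeholders, not the published femoral head (FH) or stature/bi-iliac (STBIB) formulae from the studies cited above.

```python
# Illustrative sketch of "mechanical" vs "morphometric" body mass estimation.
# All coefficients below are invented placeholders, NOT the published formulae.

def mass_from_femoral_head(fh_mm, a=2.2, b=-40.0):
    """'Mechanical' estimate: linear in femoral head breadth (mm)."""
    return a * fh_mm + b

def mass_from_stature_biiliac(stature_cm, biiliac_cm, c=0.37, d=3.0, e=-80.0):
    """'Morphometric' estimate from stature and bi-iliac breadth (cm)."""
    return c * stature_cm + d * biiliac_cm + e

fh_est = mass_from_femoral_head(45.0)             # 2.2*45 - 40 = 59.0 kg
st_est = mass_from_stature_biiliac(170.0, 28.0)   # 0.37*170 + 3*28 - 80 = 66.9 kg
```

Comparing the two estimates across a skeletal sample, as in the study, amounts to correlating the outputs of two such predictors.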
Directory of Open Access Journals (Sweden)
V.Ya. Nusinov
2017-08-01
Full Text Available The research determines that the existing methods of estimating an enterprise’s economic potential are based on the use of additive, multiplicative and rating models. It is determined that the existing methods have a number of shortcomings; for example, not all of the methods take into account the branch features of the analysis, or the level of development of the enterprise compared with other enterprises. It is suggested that such shortcomings be levelled by taking into account, when estimating the integral level of potential, not only the branch features of enterprises’ activity but also the intra-account economic clusterization of such enterprises. Scientific works connected with the use of clusters for the estimation of economic potential are generalized. According to the results of this generalization, it is possible to distinguish 9 scientific approaches in this direction: the use of natural clusterization of enterprises with the purpose of estimating and increasing region potential; the use of natural clusterization of enterprises with the purpose of estimating and increasing industry potential; the use of artificial clusterization of enterprises with the purpose of estimating and increasing region potential; the use of artificial clusterization of enterprises with the purpose of estimating and increasing industry potential; the use of artificial clusterization of enterprises with the purpose of clustering potential estimation; the use of artificial clusterization of enterprises with the purpose of estimating clustering competitiveness potential; the use of natural (artificial) clusterization for the estimation of clustering efficiency; the use of natural (artificial) clusterization for increasing the level of region (industry) development; the use of methods of estimating the economic potential of a region (industry) or its constituents for the construction of the clusters. It is determined that the use of clusterization method in
A Qualitative Method to Estimate HSI Display Complexity
International Nuclear Information System (INIS)
Hugo, Jacques; Gertman, David
2013-01-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation
A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories
Bernard, William P.
2018-01-01
Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods briefly touched on are batch and sequential processing; smoothing, estimation, and prediction; sensor fusion and sensor fusion architectures; data association; Bayesian and non-Bayesian filtering; the family of Kalman filters; models of the dynamics of the phases of a rocket's flight; and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.
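Of the sequential-processing methods surveyed, the Kalman filter is the workhorse. A minimal one-dimensional instance, estimating a constant state from noisy measurements, can be sketched as follows (parameters are illustrative):

```python
# Minimal 1-D Kalman filter for a constant state observed in noise:
# predict step is trivial (static state), so each iteration is a
# measurement update of the estimate x and its variance p.

def kalman_1d(measurements, meas_var, x0=0.0, p0=100.0):
    x, p = x0, p0                  # prior estimate and prior variance
    for z in measurements:
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # update estimate toward measurement
        p = (1.0 - k) * p          # posterior variance shrinks
    return x, p

x, p = kalman_1d([1.1, 0.9, 1.05, 0.95], meas_var=0.01)
```

With a diffuse prior (`p0` large), the estimate converges toward the sample mean of the measurements while the variance decreases monotonically.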
An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method
Jose Miguel Albala-Bertrand
2003-01-01
There are alternative methods of estimating capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of some convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used ...
An anti-disturbing real time pose estimation method and system
Zhou, Jian; Zhang, Xiao-hu
2011-08-01
Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when some of the known features, or even all of them, are invisible. Firstly, known features are tracked to calculate the pose in the current and the next image. Secondly, some unknown but good features to track are automatically detected in the current and the next image. Thirdly, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the rigid object can be solved from the object's pose at the two moments and their 2D information in the two images, except in only two cases: the first is that the camera and object have no relative motion and camera parameters such as focal length, principal point, etc. do not change between the two moments; the second is that there is no shared scene or no matched feature in the two images. Finally, because the features that were unknown at first are now known, pose estimation can continue in the following images despite the loss of the originally known features, by repeating the process mentioned above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed in this paper. Graphics Processing Unit (GPU) parallel computing was also used to extract and match hundreds of features for real-time pose estimation, which is hard to do on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of finite population total under model – based framework. Nonparametric regression approach as a method of estimating finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Asiri, Sharefa M.
2017-10-19
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is formulated as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through some numerical simulations. The robustness of the method in the presence of noise is also studied.
Estimating heat-to-heat variation in mechanical properties from a statistician's point of view
International Nuclear Information System (INIS)
Hebble, T.L.
1976-01-01
A statistical technique known as analysis of variance (ANOVA) is used to estimate the variance and standard deviation of differences among heats. The total variation of a collection of observations and how an ANOVA can be used to partition the total variation into its sources are discussed. Then, the ANOVA is adapted to published Japanese data indicating how to estimate heat-to-heat variation. Finally, numerical results are computed for several tensile and creep properties of Types 304 and 316 SS
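The variance-components step described above can be sketched directly. For a balanced one-way ANOVA with k heats and n specimens per heat, the method-of-moments estimate of the between-heat variance is (MSB − MSW)/n (a generic illustration, not the published Japanese data):

```python
# One-way ANOVA variance components for heat-to-heat variation
# (balanced design assumed: every heat has the same number of specimens).

def heat_to_heat_variance(groups):
    k = len(groups)                    # number of heats
    n = len(groups[0])                 # specimens per heat
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)      # between heats
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return max((msb - msw) / n, 0.0)   # truncate negative estimates to zero

# Three toy "heats" of three tensile measurements each:
var_heat = heat_to_heat_variance([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

The square root of the returned value estimates the heat-to-heat standard deviation discussed in the abstract.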
Comparison of different methods for estimation of potential evapotranspiration
International Nuclear Information System (INIS)
Nazeer, M.
2010-01-01
Evapotranspiration can be estimated with different available methods. The aim of this research study is to compare and evaluate the potential evapotranspiration originally measured from a Class A pan against the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that of the stated methods. For each evapotranspiration method, results were compared against mean monthly potential evapotranspiration (PET) from pan data according to FAO (ETo = Kpan × Epan), from daily recorded data of twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered very significant (R² = 0.98) at 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study, the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis, the FAO56 PMM ranked first (R² = 0.98), followed by the Hargreaves method (R² = 0.96), the Penman-Monteith method (R² = 0.94) and the Penman method (R² = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables for ETo estimation is more reliable than the other alternative methods; Hargreaves is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
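The temperature-only character of the Hargreaves method is easy to see from its standard form, ETo = 0.0023 · Ra · (Tmean + 17.8) · √(Tmax − Tmin), with extraterrestrial radiation Ra in equivalent mm/day. A minimal sketch (input values are illustrative, not the study's data):

```python
# Hargreaves reference evapotranspiration: needs only daily max/min air
# temperature (deg C) plus extraterrestrial radiation Ra, which is a
# function of latitude and day of year (tabulated in FAO-56).
import math

def hargreaves_eto(t_max, t_min, ra_mm_day):
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

eto = hargreaves_eto(30.0, 15.0, 12.0)   # roughly 4.3 mm/day for these inputs
```

This is why the abstract recommends it as a fallback when humidity, wind and sunshine records are missing or unreliable.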
A new method of hybrid frequency hopping signals selection and blind parameter estimation
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
2018-04-01
Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, it is difficult to process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and basic PRI transformation theory to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals; the estimation error of the frequency hopping period is about 5% and the estimation error of the frequency hopping frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
PhySIC: a veto supertree method with desirable properties.
Ranwez, Vincent; Berry, Vincent; Criscuolo, Alexis; Fabre, Pierre-Henri; Guillemot, Sylvain; Scornavacca, Celine; Douzery, Emmanuel J P
2007-10-01
This paper focuses on veto supertree methods; i.e., methods that aim at producing a conservative synthesis of the relationships agreed upon by all source trees. We propose desirable properties that a supertree should satisfy in this framework, namely the non-contradiction property (PC) and the induction property (PI). The former requires that the supertree does not contain relationships that contradict one or a combination of the source topologies, whereas the latter requires that all topological information contained in the supertree is present in a source tree or collectively induced by several source trees. We provide simple examples to illustrate their relevance and that allow a comparison with previously advocated properties. We show that these properties can be checked in polynomial time for any given rooted supertree. Moreover, we introduce the PhySIC method (PHYlogenetic Signal with Induction and non-Contradiction). For k input trees spanning a set of n taxa, this method produces a supertree that satisfies the above-mentioned properties in O(kn³ + n⁴) computing time. The polytomies of the produced supertree are also tagged by labels indicating areas of conflict as well as those with insufficient overlap. As a whole, PhySIC enables the user to quickly summarize consensual information of a set of trees and localize groups of taxa for which the data require consolidation. Lastly, we illustrate the behaviour of PhySIC on primate data sets of various sizes, and propose a supertree covering 95% of all primate extant genera. The PhySIC algorithm is available at http://atgc.lirmm.fr/cgi-bin/PhySIC.
Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods
Rosso, Abbey; Neitlich, Peter; Smith, Robert J.
2014-01-01
Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
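The biomass-to-volume relationship described above is a regression through the origin (zero biomass at zero volume). A minimal sketch of that fit, with toy plot values rather than the study's data:

```python
# Zero-intercept (regression-through-origin) fit of harvested biomass
# against a cover-weighted volume index: least-squares slope for y = b*x
# is sum(x*y) / sum(x^2).

def zero_intercept_slope(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

volume = [0.5, 1.0, 2.0, 3.5]      # e.g. cover fraction * mean height per plot
biomass = [1.0, 2.0, 4.0, 7.0]     # harvested mass per plot (toy values)
b = zero_intercept_slope(volume, biomass)   # slope b = 2.0 for these data
```

Multiplying field-measured cover-weighted volumes by the fitted slope then yields the non-destructive biomass estimates compared in the study.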
Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.
Directory of Open Access Journals (Sweden)
Abbey Rosso
Full Text Available Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.
Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.
Rosso, Abbey; Neitlich, Peter; Smith, Robert J
2014-01-01
Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.
Improved vertical streambed flux estimation using multiple diurnal temperature methods in series
Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.
2017-01-01
Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
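The Ar step can be illustrated with a Hatch-type amplitude equation, ln(Ar) = (Δz/2κ)(v − √((α + v²)/2)) with α = √(v⁴ + (8πκ/P)²), inverted for the thermal-front velocity v by a root search. This is only a sketch under assumed parameter values and a v-positive-downward sign convention, not the VFLUX implementation:

```python
# Recover a vertical thermal-front velocity from a diurnal amplitude ratio
# by inverting a Hatch-type amplitude equation with bisection.
# kappa, dz and the sign convention (v > 0 downward) are assumptions.
import math

P = 86400.0        # diurnal period, s
KAPPA = 1.0e-6     # effective thermal diffusivity, m^2/s (assumed)
DZ = 0.10          # sensor spacing, m (assumed)

def log_ar(v):
    """ln(amplitude ratio) between sensors for thermal-front velocity v."""
    alpha = math.sqrt(v ** 4 + (8.0 * math.pi * KAPPA / P) ** 2)
    return DZ / (2.0 * KAPPA) * (v - math.sqrt((alpha + v ** 2) / 2.0))

def velocity_from_ar(ar, lo=-1e-3, hi=1e-3, tol=1e-12):
    """Bisection: log_ar is monotonically increasing in v over this range."""
    target = math.log(ar)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log_ar(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

v_true = 2.0e-6                                   # m/s, downward
v_est = velocity_from_ar(math.exp(log_ar(v_true)))  # round-trip recovery
```

Converting the thermal-front velocity to a Darcy flux requires the ratio of bulk to water heat capacities, which is where the thermal diffusivity fixed by the combined ArΔϕ method enters in the approach described above.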
Digital baseline estimation method for multi-channel pulse height analyzing
International Nuclear Information System (INIS)
Xiao Wuyun; Wei Yixiang; Ai Xianyun
2005-01-01
The basic features of digital baseline estimation for multi-channel pulse height analysis are introduced. The weight-function of minimum-noise baseline filter is deduced with functional variational calculus. The frequency response of this filter is also deduced with Fourier transformation, and the influence of parameters on amplitude frequency response characteristics is discussed. With MATLAB software, the noise voltage signal from the charge sensitive preamplifier is simulated, and the processing effect of minimum-noise digital baseline estimation is verified. According to the results of this research, digital baseline estimation method can estimate baseline optimally, and it is very suitable to be used in digital multi-channel pulse height analysis. (authors)
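The core idea of digital baseline estimation, tracking the quiescent level while ignoring samples that belong to a pulse, can be sketched with a gated exponential moving average. This is a toy illustration with invented parameters, not the minimum-noise filter derived in the paper:

```python
# Toy digital baseline estimator for pulse-height analysis: an exponential
# moving average that updates only on samples near the current baseline,
# so large pulses are gated out of the estimate.

def estimate_baseline(samples, threshold=5.0, alpha=0.05):
    b = samples[0]                      # initialize from the first sample
    for s in samples[1:]:
        if abs(s - b) < threshold:      # gate: quiescent samples only
            b += alpha * (s - b)        # first-order low-pass update
    return b

# Flat baseline at 10 ADC units with a short pulse superimposed:
signal = [10.0] * 200 + [100.0, 150.0, 80.0] + [10.0] * 200
baseline = estimate_baseline(signal)    # tracks 10.0, unaffected by the pulse
```

Subtracting such a running baseline before peak-height measurement is what allows the multi-channel analyzer to histogram pulse amplitudes without baseline drift.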
Comparing four methods to estimate usual intake distributions
Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.
2011-01-01
Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data
Review of best estimate plus uncertainty methods of thermal-hydraulic safety analysis
International Nuclear Information System (INIS)
Prosek, A.; Mavko, B.
2003-01-01
In 1988 the United States Nuclear Regulatory Commission approved the revised rule on the acceptance of emergency core cooling system (ECCS) performance. Since then, there has been significant interest in the development of codes and methodologies for best-estimate loss-of-coolant accident (LOCA) analyses. Several new best estimate plus uncertainty (BEPU) methods were developed around the world. The purpose of the paper is to review the developments in the direction of best estimate approaches with uncertainty quantification and to discuss the problems in practical applications of BEPU methods. In general, the licensee methods follow the original methods. The study indicated that uncertainty analysis with random sampling of input parameters and the use of order statistics for desired tolerance limits of output parameters is today a commonly accepted and mature approach. (author)
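The order-statistics approach mentioned above is usually applied through Wilks' formula: for a one-sided tolerance limit, the smallest number of code runs N such that the maximum of N sampled outputs bounds the β-quantile with confidence γ satisfies 1 − β^N ≥ γ. A short sketch:

```python
# First-order (one-sided) Wilks formula for BEPU sample sizing:
# smallest N with 1 - beta**N >= gamma, i.e. N >= ln(1-gamma)/ln(beta).
import math

def wilks_runs(beta=0.95, gamma=0.95):
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

n = wilks_runs()   # the familiar 59 runs for the 95%/95% criterion
```

This is why BEPU demonstrations commonly quote 59 calculations for a 95% coverage / 95% confidence tolerance limit on peak cladding temperature.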
SCoPE: an efficient method of Cosmological Parameter Estimation
International Nuclear Information System (INIS)
Das, Santanu; Souradeep, Tarun
2014-01-01
Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out some cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data
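The delayed-rejection idea can be shown on a toy target: after a first-stage rejection, a second, narrower proposal is tried with a corrected acceptance ratio that preserves detailed balance. This is a generic illustration of the technique (symmetric Gaussian proposals, 1-D standard normal target), not the SCoPE implementation:

```python
# Two-stage delayed-rejection Metropolis on an unnormalized N(0,1) target.
# Stage 1 uses a wide proposal s1; on rejection, stage 2 retries with a
# narrower proposal s2 and the Tierney-Mira corrected acceptance ratio.
import math, random

def log_target(x):                     # log of unnormalized N(0,1) density
    return -0.5 * x * x

def dr_metropolis(n_steps, x0=0.0, s1=2.5, s2=0.5, seed=1):
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        y1 = x + rng.gauss(0.0, s1)                    # stage-1 proposal
        a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x = y1
        else:                                          # delayed rejection
            y2 = x + rng.gauss(0.0, s2)
            a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
            # ratio of stage-1 proposal densities q1(y2->y1)/q1(x->y1):
            q_ratio = math.exp(((y1 - x) ** 2 - (y1 - y2) ** 2) / (2.0 * s1 * s1))
            num = math.exp(log_target(y2) - log_target(x)) * q_ratio * (1.0 - a1_rev)
            den = 1.0 - a1
            if den > 0.0 and rng.random() < min(1.0, num / den):
                x = y2
        chain.append(x)
    return chain

chain = dr_metropolis(20000)
```

Because a rejected wide move is followed by a cheap local retry, the overall per-step acceptance rate rises, which is the effect the abstract reports for SCoPE.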
Comparative study of various methods of primary energy estimation in nucleon-nucleon interactions
International Nuclear Information System (INIS)
Goyal, D.P.; Yugindro Singh, K.; Singh, S.
1986-01-01
The various available methods for the estimation of primary energy in nucleon-nucleon interactions have been examined by using the experimental data on angular distributions of shower particles from p-N interactions at two accelerator energies, 67 and 400 GeV. Three different groups of shower particle multiplicities have been considered for interactions at both energies. It is found that the different methods give quite different estimates of the primary energy. Moreover, each method is found to give different values of energy according to the choice of multiplicity group. It is concluded that the E_ch method is relatively the best among the available methods, and that within this method, considering the group of small multiplicities gives a much better result. The method also yields plausible estimates of inelasticity in high energy nucleon-nucleon interactions. (orig.)
Directory of Open Access Journals (Sweden)
N. D. Tiannikova
2014-01-01
Full Text Available G.D. Kartashov developed a technique for determining the scaling functions that relate rapid (accelerated) testing results to the normal mode. Its feature is preliminary testing of products of one sample, including tests in alternating modes. The standard procedure of preliminary tests (researches) is as follows: n groups of products with m elements in each start being tested in the normal mode and, after a failure of one of the products in a group, the remaining products are tested in the accelerated mode. In addition to tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of product, identical for any lot, is determined using such testing results for products from the same lot. A drawback of this technique is that tests in the alternating mode have to be conducted until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined using right-censored data, thus giving the opportunity to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimation through Renyi statistics minimization is implemented by the Monte Carlo method. The results of the modeling show that the acceleration factor estimate obtained through Renyi statistics minimization is workable for rather large n. But for small sample volumes some systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions (exponential and Weibull). Therefore the paper also presents calculated correction factors for the cases of the exponential and Weibull distributions.
A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation
Directory of Open Access Journals (Sweden)
Tianshuang Qiu
2007-12-01
Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.
Directory of Open Access Journals (Sweden)
E. O. Adam
2017-11-01
Full Text Available The arid and semi-arid catchments in drylands generally require especially effective management, as scarcity of the resources and information needed to leverage studies and investigations is their common characteristic. Hydrology is one of the most important elements in the management of resources, and a deep understanding of hydrological responses is the key to better planning and land management. Surface runoff quantification of such ungauged semi-arid catchments is considered among the important challenges. The 7586 km² catchment under investigation is located in a semi-arid region in central Sudan, where the mean annual rainfall is around 250 mm and represents the ultimate source for water supply. The objective is to parameterize the hydrological characteristics of the catchment and estimate surface runoff using suitable methods and hydrological models that suit the nature of such ungauged catchments with scarce geospatial information. In order to produce spatial runoff estimates, satellite rainfall was used. Remote sensing and GIS were incorporated in the investigations and in the generation of landcover and soil information. A five-day rainfall event (50.2 mm) was used for the SCS CN model, which is considered suitable for this catchment, as the SCS curve number (CN) method is widely used for estimating infiltration characteristics depending on landcover and soil properties. Runoff depths of 3.6, 15.7 and 29.7 mm were estimated for the three different Antecedent Moisture Conditions (AMC-I, AMC-II and AMC-III). The estimated runoff depths for AMC-II and AMC-III indicate the possibility of having small artificial surface reservoirs that could provide water for domestic and small household agricultural use.
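The SCS curve-number calculation behind these runoff depths is compact: potential retention S follows from CN, initial abstraction is taken as 0.2·S, and runoff is Q = (P − 0.2S)² / (P + 0.8S) for P > 0.2S. A sketch with an illustrative CN (the study's CN values for each AMC are not given in the abstract):

```python
# SCS curve-number runoff for a storm depth P (mm); the curve number CN
# encodes landcover/soil and is adjusted for antecedent moisture (AMC).

def scs_runoff(p_mm, cn):
    s = 25400.0 / cn - 254.0           # potential maximum retention, mm
    ia = 0.2 * s                       # initial abstraction, mm
    if p_mm <= ia:
        return 0.0                     # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_runoff(50.2, 80.0)             # ~13.9 mm for the 50.2 mm event, CN = 80
```

Raising CN (wetter antecedent conditions, AMC-III) increases Q for the same storm, which is why the three AMC classes give the spread of runoff depths reported above.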
Performance evaluation of the spectral centroid downshift method for attenuation estimation.
Samimi, Kayvan; Varghese, Tomy
2015-05-01
Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of the attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method in terms of said scan parameters.
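The centroid-downshift principle this record relies on can be sketched on synthetic spectra: for a Gaussian transmit spectrum, attenuation shifts the spectral centroid down linearly with depth, and the attenuation coefficient follows from the regression slope. The signal model and all parameter values below are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

f = np.linspace(0.0, 15.0, 2000)      # frequency axis, MHz
f0, sigma = 5.0, 1.0                  # Gaussian pulse center and std, MHz
alpha = 0.06                          # true attenuation, Np/cm/MHz (two-way)
depths = np.linspace(1.0, 5.0, 9)     # cm

centroids = []
for z in depths:
    # Gaussian transmit spectrum times frequency-dependent attenuation.
    power = np.exp(-(f - f0)**2 / (2 * sigma**2)) * np.exp(-4 * alpha * f * z)
    centroids.append(np.sum(f * power) / np.sum(power))

# For a Gaussian spectrum the centroid falls linearly with depth:
#   centroid(z) = f0 - 4*alpha*sigma^2*z, so alpha = -slope / (4*sigma^2).
slope = np.polyfit(depths, centroids, 1)[0]
alpha_hat = -slope / (4 * sigma**2)
```

The linear centroid-depth relation is exactly the property the CDS estimator exploits; with noisy RF data the centroid itself becomes a random variable, which is what the paper's variance analysis quantifies.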
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and easy to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters, so that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means to estimate PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
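To make the LS route concrete, here is a minimal sketch of least-squares estimation of a Parallel Hammerstein Model with polynomial static nonlinearities followed by FIR branches. The branch orders, data, and the plain ridge regularizer are illustrative assumptions; the article's regularizer is inspired by impulse-response kernel methods, whereas simple ridge is used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, P = 400, 5, 3                       # samples, FIR length, polynomial order
u = rng.standard_normal(N)

def phm_regressor(u, L, P):
    # Columns: delayed copies of u**p for each branch p = 1..P.
    # The problem is linear in the stacked impulse responses.
    n = u.size
    cols = []
    for p in range(1, P + 1):
        v = u**p
        for k in range(L):
            cols.append(np.concatenate([np.zeros(k), v[:n - k]]))
    return np.column_stack(cols)

X = phm_regressor(u, L, P)
g_true = rng.standard_normal(X.shape[1])  # stacked branch impulse responses
y = X @ g_true + 0.05 * rng.standard_normal(N)

# Ridge-regularized least squares (simplest possible regularizer):
lam = 1e-3
g_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```

The key point the article exploits is visible here: once the regressor matrix is built, PHM estimation is ordinary linear regression, so any regularized LS machinery applies directly.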
Using Finite Element Method to Estimate the Material Properties of a Bearing Cage
2018-02-01
… a novel approach was developed to empirically test the phenolic cage and to determine the respective elastic and failure material properties … was defeatured to decrease computing time, and the tooling was made rigid. The elements employed were 8-node brick elements with reduced integration.
Density Estimation in Several Populations With Uncertain Population Membership
Ma, Yanyuan
2011-09-01
We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.
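A minimal sketch of the core idea, density estimation under uncertain membership: each observation carries a probability of belonging to a given population, and a kernel density estimate weights observations by those probabilities. The synthetic data, weights, and bandwidth below are illustrative assumptions, not the authors' estimator or bandwidth selector.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# True (unobserved) membership: population 1 ~ N(0,1), population 2 ~ N(4,1).
z = rng.random(n) < 0.5
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))
# Hypothetical soft membership information, e.g. from a classifier:
# P(observation comes from population 1).
w = np.where(z, 0.9, 0.1)

def weighted_kde(x, w, grid, h):
    # Probability-weighted Gaussian kernel density estimate for one population.
    u = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (k * w).sum(axis=1) / (w.sum() * h)

grid = np.linspace(-4.0, 8.0, 200)
f1 = weighted_kde(x, w, grid, h=0.4)   # density estimate for population 1
```

With informative weights, the estimated density for population 1 peaks near its true mean even though no observation's membership is known with certainty.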
Estimation method for volumes of hot spots created by heavy ions
International Nuclear Information System (INIS)
Kanno, Ikuo; Kanazawa, Satoshi; Kajii, Yuji
1999-01-01
A simple and convenient method is described for estimating the volumes of hot spots, expressed as the ratio of the hot-spot volume to that of a cone with the same length and bottom radius as the hot spot. This calculation method is useful for the study of the damage production mechanism in hot spots, and is also convenient for estimating the electron-hole densities in plasma columns created by heavy ions in semiconductor detectors. (author)
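The bookkeeping described above reduces to scaling a cone volume by the tabulated hot-spot-to-cone ratio. The sketch below only illustrates that arithmetic; the ratio value and the dimensions are hypothetical, not taken from the paper.

```python
import math

def cone_volume(radius_nm, length_nm):
    # Volume of a cone with the hot spot's bottom radius and length (nm^3).
    return math.pi * radius_nm**2 * length_nm / 3.0

# Hypothetical example: bottom radius 5 nm, length 100 nm, and an assumed
# hot-spot-to-cone volume ratio k (purely illustrative, not a paper value).
k = 0.6
v_hotspot = k * cone_volume(5.0, 100.0)
```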
Robust methods and asymptotic theory in nonlinear econometrics
Bierens, Herman J
1981-01-01
This Lecture Note deals with asymptotic properties, i.e. weak and strong consistency and asymptotic normality, of parameter estimators of nonlinear regression models and nonlinear structural equations under various assumptions on the distribution of the data. The estimation methods involved are nonlinear least squares estimation (NLLSE), nonlinear robust M-estimation (NLRME) and nonlinear weighted robust M-estimation (NLWRME) for the regression case, and nonlinear two-stage least squares estimation (NL2SLSE) and a new method called minimum information estimation (MIE) for the case of structural equations. The asymptotic properties of the NLLSE and the two robust M-estimation methods are derived from further elaborations of results of Jennrich. Special attention is paid to the comparison of the asymptotic efficiency of NLLSE and NLRME. It is shown that if the tails of the error distribution are fatter than those of the normal distribution, NLRME is more efficient than NLLSE. The NLWRME method is appropriate ...
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing up various disciplines frequently produces things that are profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
A subagging regression method for estimating the qualitative and quantitative state of groundwater
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young
2017-08-01
A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated on synthetic data in competition with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and the uncertainties are reasonably estimated; the other methods offer no uncertainty analysis option. For further validation, actual groundwater data are analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of Gaussian or non-Gaussian skewed data. However, GPR is expected to be limited in applications to data severely corrupted by outliers, owing to its non-robustness. From these implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as groundwater level and contaminant concentration.
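The subagging idea can be sketched in a few lines: fit the trend on many random subsamples and aggregate, with the spread of the subsample fits serving as an uncertainty estimate. The linear trend model and synthetic groundwater-level-like series below are illustrative assumptions, not the SBR method's actual regression machinery.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic series with a known linear trend plus noise.
t = np.linspace(0.0, 10.0, 120)
y = 0.3 * t + rng.normal(0.0, 0.5, t.size)

def subagging_trend(t, y, n_sub=200, frac=0.5):
    # Fit a linear trend on many random subsamples (without replacement)
    # and aggregate: mean slope = estimate, spread = its uncertainty.
    m = int(frac * t.size)
    slopes = []
    for _ in range(n_sub):
        idx = rng.choice(t.size, size=m, replace=False)
        slopes.append(np.polyfit(t[idx], y[idx], 1)[0])
    slopes = np.asarray(slopes)
    return slopes.mean(), slopes.std()

slope, slope_sd = subagging_trend(t, y)
```

The aggregation step is what distinguishes subagging from a single fit: the ensemble of subsample estimates yields both a point estimate and a spread, which is the "uncertainty analysis option" the abstract highlights.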
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration, as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and in the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Estimation and uncertainty of reversible Markov models.
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-07
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
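For the maximum-likelihood part, the standard self-consistent iteration for a reversible transition matrix from a transition count matrix can be sketched as follows. This is the generic fixed-point update from the Markov-model literature, not the PyEMMA implementation, and the count matrix is made up.

```python
import numpy as np

# Transition count matrix C[i, j] = observed transitions i -> j.
C = np.array([[90.0, 10.0,  0.0],
              [ 8.0, 80.0, 12.0],
              [ 0.0, 15.0, 85.0]])

c = C.sum(axis=1)            # row counts
X = C + C.T                  # symmetric initial guess for the joint counts
for _ in range(1000):
    x = X.sum(axis=1)
    # Self-consistent update; X stays symmetric, so the resulting
    # transition matrix satisfies detailed balance by construction.
    X = (C + C.T) / (c[:, None] / x[:, None] + c[None, :] / x[None, :])

T = X / X.sum(axis=1, keepdims=True)       # reversible transition matrix
pi = X.sum(axis=1) / X.sum()               # stationary distribution
flux = pi[:, None] * T
db_violation = np.max(np.abs(flux - flux.T))   # detailed-balance check
```

Detailed balance (pi_i T_ij = pi_j T_ji) holds at every iterate because both the numerator and the denominator of the update are symmetric, which is the essential structural difference from the unconstrained MLE T = C / c.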
A generic method for estimating system reliability using Bayesian networks
International Nuclear Information System (INIS)
Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples
An Overview and Comparison of Online Implementable SOC Estimation Methods for Lithium-ion Battery
DEFF Research Database (Denmark)
Meng, Jinhao; Ricco, Mattia; Luo, Guangzhao
2018-01-01
Many SOC estimation methods have been proposed in the literature. However, only a few of them consider real-time applicability. This paper reviews recently proposed online SOC estimation methods and classifies them into five categories. Their principal features are illustrated, and the main pros and cons are provided. The SOC estimation methods are compared and discussed in terms of accuracy, robustness, and computation burden. Afterward, as the most popular type of model-based SOC estimation algorithms, seven nonlinear filters existing in the literature are compared in terms of their accuracy…
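The simplest member of the method families reviewed in such surveys, Coulomb counting, can be sketched in a few lines (rated capacity and initial SOC are assumed known; the numbers are illustrative):

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    # Integrate current over time and subtract the transferred charge
    # from the initial state of charge (discharge current positive).
    soc = soc0
    for i in currents_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)
    return soc

# 1 A constant discharge for 1 hour on a 2 Ah cell: SOC drops by 0.5.
soc = coulomb_count(soc0=1.0, currents_a=[1.0] * 3600, dt_s=1.0, capacity_ah=2.0)
```

Its weakness, which motivates the model-based filters compared in the paper, is that integration drift and an unknown initial SOC are never corrected by any measurement feedback.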
Wu, Zhihong; Lu, Ke; Zhu, Yuan
2015-01-01
The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.
A simple method for estimating the convection- dispersion equation ...
African Journals Online (AJOL)
Jane
2011-08-31
Aug 31, 2011 … approach of modeling solute transport in porous media uses the deterministic … Methods of estimating CDE transport parameters can be divided into statistical … diffusion-type model for longitudinal mixing of fluids in flow.
Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods
Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.
2002-01-01
If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of such methods have been…
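One classical quantile-based rule of the kind compared in such studies is the extended Pearson-Tukey approximation for the mean, paired here with a normal-based rule for the spread. Both are offered as illustrations of the general approach, not as this paper's recommended method.

```python
def pearson_tukey(q05, q50, q95):
    # Extended Pearson-Tukey three-point approximation of the mean from
    # the 5%, 50% and 95% quantiles elicited from an expert.
    mean = 0.185 * q05 + 0.630 * q50 + 0.185 * q95
    # For a normal distribution q95 - q05 = 3.29 * sigma, hence:
    std = (q95 - q05) / 3.29
    return mean, std

# Sanity check against a standard normal: q05 = -1.645, q50 = 0, q95 = 1.645.
m, s = pearson_tukey(-1.645, 0.0, 1.645)
```

On the standard normal check the rule returns mean 0 and standard deviation 1, which is the kind of benchmark an experimental comparison of elicitation methods builds on.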
An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation
Sijs, J.; Hanebeck, U.; Noack, B.
2013-01-01
State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.
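A generic building block for such fusion, when the cross-correlation between the two estimates is unknown, is covariance intersection. The sketch below shows the basic fusion step for fully overlapping state vectors; the paper's empirical method for *partially* overlapping vectors is more involved, so this is background illustration only.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    # Fuse two estimates (x1, P1) and (x2, P2) of the same state without
    # knowing their cross-covariance; w in (0, 1) weights the two sources.
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
x2, P2 = np.array([0.0, 1.0]), 2.0 * np.eye(2)
x, P = covariance_intersection(x1, P1, x2, P2)
```

With equal weights and equal covariances the fused state is simply the average of the two estimates; the fused covariance stays conservative, which is the price paid for not knowing the correlation between nodes.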
Snyder, Noah P.; Rubin, David M.; Alpers, Charles N.; Childs, Jonathan R.; Curtis, Jennifer A.; Flint, Lorraine E.; Wright, Scott A.
2004-01-01
Studies of reservoir sedimentation are vital to understanding scientific and management issues related to watershed sediment budgets, depositional processes, reservoir operations, and dam decommissioning. Here we quantify the mass, organic content, and grain-size distribution of a reservoir deposit in northern California by two methods of extrapolating measurements of sediment physical properties from cores to the entire volume of impounded material. Englebright Dam, completed in 1940, is located on the Yuba River in the Sierra Nevada foothills. A research program is underway to assess the feasibility of introducing wild anadromous fish species to the river upstream of the dam. Possible management scenarios include removing or lowering the dam, which could cause downstream transport of stored sediment. In 2001 the volume of sediments deposited behind Englebright Dam occupied 25.5% of the original reservoir capacity. The physical properties of this deposit were calculated using data from a coring campaign that sampled the entire reservoir sediment thickness (6–32 m) at six locations in the downstream ∼3/4 of the reservoir. As a result, the sediment in the downstream part of the reservoir is well characterized, but in the coarse, upstream part of the reservoir, only surficial sediments were sampled, so calculations there are more uncertain. Extrapolation from one-dimensional vertical sections of sediment sampled in cores to entire three-dimensional volumes of the reservoir deposit is accomplished via two methods, using assumptions of variable and constant layer thickness. Overall, the two extrapolation methods yield nearly identical estimates of the mass of the reservoir deposit of ∼26 × 106 metric tons (t) of material, of which 64.7–68.5% is sand and gravel. Over the 61 year reservoir history this corresponds to a maximum basin-wide sediment yield of ∼340 t/km2/yr, assuming no contribution from upstream parts of the watershed impounded by other dams. The
The Software Cost Estimation Method Based on Fuzzy Ontology
Directory of Open Access Journals (Sweden)
Plecka Przemysław
2014-12-01
Full Text Available In the course of the sales process of Enterprise Resource Planning (ERP) systems, it turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding this cost and about the margin of safety. One method that gives more accurate results at the stage of trade talks is a method based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the expected cost of the work, but also about the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.
International Nuclear Information System (INIS)
Hecht, M.J.; Catton, I.; Kastenberg, W.E.
1976-12-01
An equation of state based on the properties of normal fluids, the law of rectilinear averages, and the second law of thermodynamics can be derived for advanced LMFBR fuels on the basis of the vapor pressure, enthalpy of vaporization, change in heat capacity upon vaporization, and liquid density at the melting point. The method consists of estimating an equation of state by means of the law of rectilinear averages and the second law of thermodynamics, integrating by means of the second law until an instability is reached, and then extrapolating by means of a self-consistent estimation of the enthalpy of vaporization
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-03-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. The equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than literature methods for pure substances over a wide boiling range (20.3-722 K).
Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.
Keating, G A; Bogen, K T
2001-01-01
Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen equals the ratio of the gas solubilities of nitrogen and oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature, and could theoretically be carried out at higher pressures and elevated temperatures.
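The estimation chain described above (measured O2 partial pressure, a Henry's-law solubility coefficient, then an assumed N2:O2 solubility ratio) can be sketched as follows. The coefficient values are placeholders chosen for illustration, not measured properties of any particular hydraulic fluid.

```python
def dissolved_air(p_o2_atm, k_o2, r_n2_o2):
    """Estimate dissolved air (gas volume per fluid volume).

    p_o2_atm : measured O2 partial pressure in the fluid (atm)
    k_o2     : assumed Henry/Ostwald solubility coefficient for O2
               (gas volume per fluid volume per atm) -- placeholder value
    r_n2_o2  : assumed N2:O2 solubility ratio at 1 atm -- placeholder value
    """
    v_o2 = k_o2 * p_o2_atm          # dissolved O2 via Henry's law
    v_n2 = r_n2_o2 * v_o2           # N2 tied to O2 by the ratio assumption
    return v_o2 + v_n2

# Illustrative numbers only (air-saturated fluid at p_O2 = 0.21 atm):
air_fraction = dissolved_air(p_o2_atm=0.21, k_o2=0.3, r_n2_o2=0.7)
```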
Estimation of the mechanical properties of the eye through the study of its vibrational modes.
Directory of Open Access Journals (Sweden)
M Á Aloy
Full Text Available Measuring the eye's mechanical properties in vivo and with minimally invasive techniques can be the key to individualized solutions for a number of eye pathologies. The development of such techniques largely relies on computational modelling of the eyeball, which optimally requires a synergic interplay between experimentation and numerical simulation. In astrophysics and geophysics, the remote measurement of structural properties of the systems in their realm is performed on the basis of (helio-)seismic techniques. As a biomechanical system, the eyeball possesses normal vibrational modes encompassing rich information about its structure and mechanical properties. However, an integral analysis of the eyeball's vibrational modes has not been performed yet. Here we develop a new finite difference method to compute both the spheroidal and, especially, the toroidal eigenfrequencies of the human eye. Using this numerical model, we show that the vibrational eigenfrequencies of the human eye fall in the interval 100 Hz-10 MHz. We find that compressible vibrational modes may leave a trace in high-frequency changes of the intraocular pressure, while incompressible normal modes could be registered by analyzing the scattering pattern that the motions of the vitreous humour leave on the retina. Existing contact lenses with embedded devices operating at high sampling frequency could be used to register the microfluctuations of the eyeball shape we obtain. We argue that an inverse problem to obtain the mechanical properties of a given eye (e.g., Young's modulus, Poisson ratio) by measuring its normal frequencies is feasible. These measurements can be done using non-invasive techniques, opening very interesting perspectives for estimating the mechanical properties of eyes in vivo.
Method for estimating capacity and predicting remaining useful life of lithium-ion battery
International Nuclear Information System (INIS)
Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom
2014-01-01
Highlights: • We develop an integrated method for the capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of Li-ion battery and predict the remaining useful life (RUL) throughout the whole life-time. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion battery used in implantable medical devices. A state projection scheme from the author’s previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of 10 years’ continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4–7 implant years further verify the effectiveness of the proposed method in the capacity estimation
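The RUL idea, projecting capacity fade forward to an end-of-service limit, can be illustrated with plain linear extrapolation on synthetic data. The paper itself uses a state projection scheme and a Gauss-Hermite particle filter to carry the uncertainty; everything below, including the fade rate and failure limit, is a toy illustration.

```python
import numpy as np

# Synthetic capacity-fade history: fraction of rated capacity vs. cycle count.
cycles = np.arange(0, 500, 50)
capacity = 1.0 - 0.0004 * cycles

# Fit the fade trend and extrapolate to the end-of-service (EOS) limit.
slope, intercept = np.polyfit(cycles, capacity, 1)
eos = 0.8                                   # failure limit: 80% of rated
eos_cycle = (eos - intercept) / slope       # cycle count at which EOS is hit
rul_cycles = eos_cycle - cycles[-1]         # remaining useful life from now
```

A filter-based method replaces the single fitted line with a distribution over fade trajectories, so the RUL comes out as a distribution rather than the point value computed here.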
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type.
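The fitting procedure described here is easy to sketch in 1-D: blur white noise with a Gaussian PSF, regress log|F|² on squared frequency, and recover the PSF width from the slope. The 1-D signal and all parameter values are illustrative assumptions standing in for a sample image.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
sigma = 3.0                                  # true Gaussian PSF std (pixels)
f = np.fft.rfftfreq(n)                       # frequencies, cycles/pixel

# Blur white noise (stand-in "sample image") with the Gaussian PSF in the
# Fourier domain; the FT of a Gaussian PSF is exp(-2*pi^2*sigma^2*f^2).
blurred_ft = np.fft.rfft(rng.standard_normal(n)) * np.exp(-2 * (np.pi * f * sigma)**2)
power = np.abs(blurred_ft)**2

# log|F|^2 is linear in f^2 for a Gaussian PSF, with slope -4*pi^2*sigma^2.
slope = np.polyfit(f**2, np.log(power), 1)[0]
sigma_hat = np.sqrt(-slope / (4 * np.pi**2))

# MTF of the recovered Gaussian PSF on the same frequency axis:
mtf = np.exp(-2 * (np.pi * f * sigma_hat)**2)
```

The linearity of the log-power plot against squared frequency is the diagnostic the paper relies on; departures from a straight line would signal a non-Gaussian PSF.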
A projection and density estimation method for knowledge discovery.
Directory of Open Access Journals (Sweden)
Adam Stanski
Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data-mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
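The simplest instance of a 1d-decomposition is a product of per-coordinate kernel density estimates, so that every estimation step runs in 1D. The sketch below illustrates only that idea under a full coordinate-independence assumption; the paper's framework allows more flexible decompositions.

```python
import numpy as np

def kde_1d(samples, x, bandwidth=0.3):
    """Gaussian kernel density estimate evaluated at 1D points x."""
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

def product_density(data, point, bandwidth=0.3):
    """Joint density as a product of per-dimension 1D KDEs.

    Assumes coordinate-wise independence -- the simplest 1d-decomposition.
    Every estimation step is performed in 1D, sidestepping the curse of
    dimensionality at the price of the independence assumption.
    """
    p = 1.0
    for d in range(data.shape[1]):
        p *= kde_1d(data[:, d], np.atleast_1d(point[d]), bandwidth)[0]
    return float(p)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=(5000, 2))   # independent 2D Gaussian
density_at_origin = product_density(data, np.array([0.0, 0.0]))
```

For a standard 2D normal the true density at the origin is 1/(2π) ≈ 0.159; the product of 1D KDEs recovers a close (slightly smoothed) value.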
Equation of motion for estimation fidelity of monitored oscillating qubits
CSIR Research Space (South Africa)
Bassa, H
2017-08-01
Full Text Available We study the convergence properties of state estimates of an oscillating qubit being monitored by a sequence of discrete, unsharp measurements. Our method derives a differential equation determining the evolution of the estimation fidelity from a...
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator of abundance for mobile populations that uses telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If these assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive the estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased, with low bias, narrow confidence intervals, and good coverage given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species with a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
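The abstract does not give the estimator's closed form, but its underlying logic can be illustrated with a telemetry-ratio simulation: counts at the surveyed groups are scaled up by the fraction of radio-tagged animals located there. All population and survey numbers below are assumptions, and this simple ratio is only a stand-in for the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n_true, n_tagged, n_groups = 1000, 50, 40
n_surveyed = 15   # groups/roosts actually visited on a survey night

estimates = []
for _ in range(200):  # repeat the survey over many "nights"
    # fission-fusion: every animal independently joins a random group
    group = rng.integers(0, n_groups, n_true)
    tagged = np.zeros(n_true, bool)
    tagged[:n_tagged] = True
    surveyed = rng.choice(n_groups, n_surveyed, replace=False)
    at_surveyed = np.isin(group, surveyed)
    count = at_surveyed.sum()                      # animals counted
    tagged_found = (at_surveyed & tagged).sum()    # telemetry detections
    if tagged_found > 0:
        # fraction of tagged animals located scales counts to abundance
        estimates.append(count * n_tagged / tagged_found)

abundance_est = float(np.mean(estimates))
```

Note that not all 40 groups are ever located, yet the estimate centers near the true abundance of 1000 (ratio estimators of this kind carry a small positive small-sample bias).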
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing the estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating, and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and with phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
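Of the three methods compared, root-to-tip regression is the simplest and can be sketched directly: regress each tip's root-to-tip genetic distance on its sampling time; the slope estimates the substitution rate and the x-intercept the age of the root. The rate, root age, and noise level below are assumed for the synthetic check.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic time-structured data under a strict clock (assumed values):
# sampling years and root-to-tip distances in substitutions/site.
rate_true, t_root = 2e-3, 1990.0
years = rng.uniform(2000.0, 2016.0, 60)
distances = rate_true * (years - t_root) + rng.normal(0, 2e-3, 60)

# Root-to-tip regression: slope = substitution-rate estimate,
# x-intercept = estimated age of the root.
slope, intercept = np.polyfit(years, distances, 1)
rate_est = float(slope)
root_age_est = float(-intercept / slope)
```

A strict clock is assumed here; as the abstract notes, relaxed-clock models that allow among-lineage rate variation can give systematically different (often higher) estimates.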
Estimation of Cross-Lingual News Similarities Using Text-Mining Methods
Directory of Open Access Journals (Sweden)
Zhouhao Wang
2018-01-01
Full Text Available In this research, two estimation algorithms for extracting cross-lingual news pairs from financial news articles, based on machine learning, are proposed. Every second, innumerable text data, including all kinds of news, reports, messages, reviews, comments, and tweets, are generated on the Internet, written not only in English but also in other languages such as Chinese, Japanese, and French. By taking advantage of the multilingual text resources provided by Thomson Reuters News, we developed two estimation algorithms for extracting cross-lingual news pairs from these resources. In our first method, we propose a novel structure that uses word information and machine learning effectively for this task. We also developed a bidirectional Long Short-Term Memory (LSTM)-based method to calculate cross-lingual semantic text similarity for long and short text, respectively. Thus, when an important news article is published, users can read similar news articles written in their native language using our method.
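A much simpler baseline for cross-lingual text similarity, often used as a reference point for LSTM-based models like the one above, is cosine similarity between averaged word vectors in a shared bilingual embedding space. The toy vocabulary and hand-tied embeddings below are assumptions for illustration; in practice the vectors come from trained bilingual embeddings.

```python
import numpy as np

# Toy shared embedding space (assumption: real systems use trained
# bilingual word embeddings, not random vectors tied by hand).
rng = np.random.default_rng(5)
vocab = ["bank", "profit", "rises", "银行", "利润", "上升"]
emb = {w: rng.standard_normal(8) for w in vocab}
# tie translation pairs together for the toy example
emb["银行"], emb["利润"], emb["上升"] = emb["bank"], emb["profit"], emb["rises"]

def sent_vec(words):
    """Average word vectors -- a simple bag-of-embeddings baseline."""
    return np.mean([emb[w] for w in words], axis=0)

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cos_sim(sent_vec(["bank", "profit", "rises"]),
              sent_vec(["银行", "利润", "上升"]))
```

Because bag-of-embeddings ignores word order, sequence models such as the bidirectional LSTM in the abstract are needed to capture compositional meaning, especially for longer texts.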
Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation
Directory of Open Access Journals (Sweden)
Bourennane Salah
2007-01-01
Full Text Available A new method for simultaneous range and bearing estimation of buried objects in the presence of unknown Gaussian noise is proposed. The method uses the MUSIC algorithm, with the noise subspace estimated from the slice fourth-order cumulant matrix of the received data. The higher-order statistics serve to remove the additive unknown Gaussian noise. A bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed that includes the acoustic scattering model at each sensor. The range and bearing of the objects at each sensor are expressed as functions of those at the first sensor. This improves object localization anywhere in the near-field or far-field zone of the sensor array. Finally, the performance of the proposed method is validated on data recorded during experiments in a water tank.
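The MUSIC step at the heart of the method can be sketched in its standard form. The sketch below is bearing-only, uses the ordinary second-order sample covariance rather than the fourth-order cumulant matrix, and assumes a far-field half-wavelength uniform linear array; the source bearing, array size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
m, snapshots = 8, 200            # sensors in a half-wavelength ULA
theta_true = 20.0                # bearing of the single source (degrees)

def steering(theta_deg):
    # far-field plane-wave steering vector for a half-wavelength ULA
    phase = np.pi * np.sin(np.radians(theta_deg)) * np.arange(m)
    return np.exp(1j * phase)

# Simulated narrowband snapshots: one source plus white Gaussian noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
x = np.outer(steering(theta_true), s) + noise

# MUSIC: noise subspace from the sample covariance, then a spectral search.
r_xx = x @ x.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(r_xx)      # eigenvalues in ascending order
noise_sub = eigvecs[:, : m - 1]              # m-1 smallest -> noise subspace
grid = np.arange(-90.0, 90.0, 0.1)
spectrum = [1.0 / np.linalg.norm(noise_sub.conj().T @ steering(t)) ** 2
            for t in grid]
theta_est = float(grid[int(np.argmax(spectrum))])
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; replacing the covariance with a fourth-order cumulant slice, as in the paper, makes the noise-subspace estimate insensitive to Gaussian noise of unknown covariance.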
The use of maturity method in estimating concrete strength
International Nuclear Information System (INIS)
Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.
2005-01-01
Prediction of the early-age strength of concrete is essential for modern concrete construction as well as for the manufacturing of structural parts. Safe and economical scheduling of critical operations such as form removal and reshoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based on a good grasp of the strength development of the concrete in use. For many years, it has been proposed tha
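The most common maturity function is the Nurse–Saul temperature-time factor (the instance standardized in ASTM C1074); the truncated abstract does not name which function it uses, so the sketch below is a general illustration with assumed values.

```python
# Nurse-Saul maturity index: M = sum (T_a - T_0) * dt, where T_a is the
# average concrete temperature over each interval and T_0 is the datum
# temperature (commonly -10 degC, below which hydration is assumed to stop).
def maturity_index(temps_c, dt_hours, datum_c=-10.0):
    """Temperature-time factor in degC-hours for a temperature history."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# 24 hourly readings at a constant 20 degC: (20 - (-10)) * 24 = 720 degC-h
m = maturity_index([20.0] * 24, dt_hours=1.0)
```

Strength is then read off a calibration curve established for the specific mix, often of the form S = a + b·ln(M); equal maturity is assumed to imply equal strength regardless of the actual time-temperature path.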