Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
The biochemical composition of plankton in a subsurface chlorophyll maximum
Dortch, Quay
1987-06-01
The biochemical composition of plankton at a station with a deep, subsurface chlorophyll maximum (SCM) below a nitrogen-depleted surface layer off the Washington coast was determined in order to answer long-standing questions about the nature and causes of SCM. The chlorophyll maximum did not correspond to a protein-biomass maximum, and chlorophyll: protein ratios indicate that only in the SCM were phytoplankton a major constituent of the total biomass. Ratios of free amino acids: protein in the particulate matter were high at all depths in the euphotic zone. From this it can be concluded that phytoplankton in the SCM are N-sufficient, since they make up 80-90% of the biomass there. Above and below the SCM, where non-phytoplankton predominate, the state of N deficiency or sufficiency of the phytoplankton cannot be ascertained until more is known about how the chemical composition of phytoplankton, zooplankton and bacteria are related. However, if it is assumed that very N-sufficient zooplankton and bacteria would not coexist with very N-deficient phytoplankton, then it seems likely that the phytoplankton were also N-sufficient or nearly so. Thus, the biochemical indicators do not support the hypothesis that the SCM forms because it represents the only layer in the water column with adequate N and light for phytoplankton growth. Comparison of the chlorophyll: protein ratios with those from cultures and from other regions suggests that oligotrophic areas have a much higher proportion of non-phytoplankton biomass than do eutrophic areas.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
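The background-versus-source comparison described in this abstract can be sketched numerically. The snippet below is a toy illustration only, with a hypothetical flat background and a Gaussian stand-in for a PSF-convolved source (not the CSC/Sherpa pipeline): it evaluates the Poisson maximum-likelihood (Cash) statistic under the two hypotheses.

```python
import numpy as np

def cash_stat(counts, model):
    # Poisson maximum-likelihood (Cash) statistic, up to a model-independent term
    model = np.clip(model, 1e-12, None)
    return 2.0 * np.sum(model - counts * np.log(model))

rng = np.random.default_rng(0)
x = np.arange(-10, 11)
bkg = np.full(x.size, 2.0)                   # hypothetical flat background: 2 counts/bin
src = 8.0 * np.exp(-0.5 * (x / 2.0) ** 2)    # Gaussian stand-in for a PSF-convolved source
counts = rng.poisson(bkg + src)              # simulated counts that do contain a source

c_bkg = cash_stat(counts, bkg)               # background-only hypothesis
c_full = cash_stat(counts, bkg + src)        # background-plus-source hypothesis
delta_c = c_bkg - c_full                     # large positive value favours the source
```

A large positive `delta_c` means the data are far better explained by background plus source than by background alone; a full implementation would fit the model parameters rather than evaluate them at their true values.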
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
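The Toeplitz/Levinson machinery mentioned above can be illustrated on synthetic data. This is a minimal sketch, not the receiver-function method itself: it solves the Yule-Walker (Toeplitz) normal equations for a prediction-error filter of a simulated AR(1) series, using SciPy's Levinson-recursion-based Toeplitz solver.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(5)
# Simulated AR(1) process; its one-step prediction filter should recover the AR coefficient
a_true = 0.7
e = rng.normal(size=20_000)
x = np.zeros_like(e)
for t in range(1, e.size):
    x[t] = a_true * x[t - 1] + e[t]

# Sample autocorrelations r[0..2] and the order-2 Yule-Walker (Toeplitz) equations
r = np.array([x[:-k].dot(x[k:]) / x.size if k else x.dot(x) / x.size
              for k in range(3)])
coeffs = solve_toeplitz(r[:2], r[1:3])   # prediction coefficients [a1, a2]
```

For an AR(1) process the first prediction coefficient should come out near the autoregressive parameter (0.7 here) and the second near zero.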
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation states, as well as to the determination of several parameters of interest in quantum optics.
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p…
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
…EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
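The l_1-penalized maximum likelihood problem described above is widely available in off-the-shelf solvers. Below is a small sketch with scikit-learn's graphical-lasso estimator on synthetic Gaussian data with one known conditional dependence; the paper's own algorithms target much larger problems, so this only illustrates the objective, not their methods.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# True sparse 4-node precision matrix: only nodes 0 and 1 are conditionally dependent
prec = np.eye(4)
prec[0, 1] = prec[1, 0] = 0.4
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=2000)

# l_1-penalized maximum likelihood estimate of the precision (inverse covariance)
model = GraphicalLasso(alpha=0.02).fit(X)
est_prec = model.precision_
# The penalty drives entries with no direct dependence toward zero, so the
# magnitude of est_prec[0, 1] should dominate, e.g., that of est_prec[2, 3]
```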
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
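A single-channel version of the parametric model above can be sketched as a grid search: under white Gaussian noise, the (approximate) ML pitch maximizes the energy captured by a harmonic sinusoidal basis. The grid, harmonic count, and test signal below are illustrative choices, not the paper's multi-channel estimator.

```python
import numpy as np

def ml_pitch(x, fs, f0_grid, n_harm=3):
    # Approximate ML (least-squares) pitch under white noise: choose the f0
    # whose harmonic sinusoid basis captures the most signal energy
    n = np.arange(x.size)
    best_f0, best_cost = None, -np.inf
    for f0 in f0_grid:
        Z = np.column_stack([np.exp(2j * np.pi * f0 * h * n / fs)
                             for h in range(1, n_harm + 1)])
        a, *_ = np.linalg.lstsq(Z, x.astype(complex), rcond=None)
        cost = np.linalg.norm(Z @ a) ** 2   # energy explained by the harmonic model
        if cost > best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

fs = 8000.0
t = np.arange(0, 0.05, 1 / fs)
# Toy harmonic signal with fundamental 220 Hz and one overtone
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
f0_hat = ml_pitch(x, fs, np.arange(100, 400, 5.0))
```

The multi-channel estimator in the paper extends this idea by summing per-channel cost terms with channel-specific amplitudes, phases, and noise variances.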
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Estimating Hydrologic Processes from Subsurface Soil Displacements
Freeman, C. E.; Murdoch, L. C.; Germanovich, L.; MIller, S.
2012-12-01
Soil moisture and the processes that control it are important components of the hydrologic cycle, but measuring these processes remains challenging. We have developed a new measurement method that offers flexibility compared to existing technology. The approach is to measure small vertical displacements in the soil which responds proportionally to distributed surface load changes such as variation in the near-surface water content. The instrument may be installed at a depth of several meters to hundreds of meters below the surface. Because the measurement averaging region scales with the depth of the displacement measurements, this approach provides the means for estimating the soil moisture time series over tens of square meters to tens of thousands of square meters. The instrument developed for this application is called a Sand-X, which is short for Sand Extensometer. It is designed for applications in unconsolidated material, ranging from clay to sand. The instrument is simple and relatively inexpensive, and it can be installed in a boring made with a hand auger or with a small drill rig. Studies at the field scale are ongoing at a field site near Clemson, SC. The site is underlain by saprolite weathered primarily from biotite gneiss. Several Sand-X devices are installed at a field site that is instrumented for validating soil moisture, precipitation, and evapotranspiration estimates. These instruments are emplaced at a depth of 6 m and respond to the weight of a vehicle out to 18 m from the well. Calibration is performed by comparing precipitation measurements to the soil displacement response. For example, the coefficient for one installation is roughly 185 nm soil displacement/mm water content change. The resolution of the instrument is approximately 10 nm, so the Sand-X is capable of detecting changes of soil moisture on the order of tenths of one mm in compliant soils like saprolite. A typical soil displacement time series shows alternating periods of
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
MAXIMUM INFORMATION AND OPTIMUM ESTIMATING FUNCTION
林路
2003-01-01
In order to construct estimating functions in some parametric models, this paper introduces two classes of information matrices. Some necessary and sufficient conditions for the information matrices achieving their upper bounds are given. For the problem of estimating the median, some optimum estimating functions based on the information matrices are acquired. Under some regularity conditions, an approach to carrying out the best basis function is introduced. In nonlinear regression models, an optimum estimating function based on the information matrices is obtained. Some examples are given to illustrate the results. Finally, the concept of optimum estimating function and the methods of constructing optimum estimating functions are developed in more general statistical models.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
…to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...
Experiments in optimizing simulations of the subsurface chlorophyll maximum in the South China Sea
Wang, Siying; Li, Shiyu; Hu, Jiatang; Geng, Bingxu
2016-04-01
The subsurface chlorophyll maximum (SCM) is widespread in the oligotrophic ocean and significantly contributes to primary production. One reason for the SCM formation is believed to be the rapid export of phytoplankton from surface layers, which might be caused by aggregation, faster sinking rates under nutrient limitation, or the formation of a resting stage. In this study, these three processes were included in a biological model to investigate their contributions to subsurface chlorophyll. To further identify their individual effects on SCM formation, four modeling experiments were carried out. Three used a simple approach with either (a) density-dependent aggregation, (b) accelerated sinking rate of phytoplankton, or (c) a resting stage. The other experiment combined all three approaches (a-c). A set of observations in the South China Sea was used to optimize the four experiments and compare their abilities to replicate observed values. The results of the experiments with the resting stage showed the best fit to the field observations. All experiments were able to capture major features of the chlorophyll field (e.g. surface bloom and SCM). The experiment with accelerated sinking rate failed to reproduce the observed profile of particulate organic carbon. The experiment with only aggregation predicted lower chlorophyll concentrations in summer than those measured in the field, while experiments with the resting stage reproduced more accurate chlorophyll concentrations. Formulas including the resting stage more successfully captured the timing of phytoplankton export than did those including aggregation and accelerated sinking rate. The processes of aggregation and accelerated sinking rate made small contributions to the SCM formation in the last experiment. Overall, these results show that introducing the resting stage improves SCM simulations of the South China Sea. The results of the experiment with only the resting stage showed that the resting cells shift
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Estimation of subsurface geomodels by multi-objective stochastic optimization
Emami Niri, Mohammad; Lumley, David E.
2016-06-01
We present a new method to estimate subsurface geomodels using a multi-objective stochastic search technique that allows a variety of direct and indirect measurements to simultaneously constrain the earth model. Inherent uncertainties and noise in real data measurements may result in conflicting geological and geophysical datasets for a given area; a realistic earth model can then only be produced by combining the datasets in a defined optimal manner. One approach to solving this problem is by joint inversion of the various geological and/or geophysical datasets, and estimating an optimal model by optimizing a weighted linear combination of several separate objective functions which compare simulated and observed datasets. In the present work, we consider the joint inversion of multiple datasets for geomodel estimation, as a multi-objective optimization problem in which separate objective functions for each subset of the observed data are defined, followed by an unweighted simultaneous stochastic optimization to find the set of best compromise model solutions that fits the defined objectives, along the so-called "Pareto front". We demonstrate that geostatistically constrained initializations of the algorithm improves convergence speed and produces superior geomodel solutions. We apply our method to a 3D reservoir lithofacies model estimation problem which is constrained by a set of geological and geophysical data measurements and attributes, and assess the sensitivity of the resulting geomodels to changes in the parameters of the stochastic optimization algorithm and the presence of realistic seismic noise conditions.
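The "set of best compromise model solutions" above is the Pareto front of the multi-objective problem. A minimal sketch of the underlying dominance test follows, with two objectives both minimized; in the geomodel setting each objective would be a data-misfit function for one dataset. The points below are illustrative only.

```python
import numpy as np

def pareto_front(costs):
    # Return indices of non-dominated points (all objectives minimized):
    # a point is dominated if some other point is no worse in every
    # objective and strictly better in at least one
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return keep

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(pts)   # keeps indices 0, 1, 3; (3, 4) is dominated by (2, 3)
```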
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and reported simulation results illustrate the performance of the bias correction for the AIMLE.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
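The two-component normal mixture fit described above can be reproduced in miniature with an EM-based ML fitter. The synthetic data below are a hypothetical stand-in for the price series (the study itself uses stock market and rubber prices):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic stand-in for the paper's data: two latent regimes with different means
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])

# Maximum likelihood fit of a two-component normal mixture via the EM algorithm
gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
means = np.sort(gm.means_.ravel())   # estimated component means, sorted
```

With well-separated components, the fitted means should land near the true values of -2 and 3.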
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan (doi:10.1214/11-AOS919)
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normality...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimate.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Orlov A. I.
2015-05-01
According to the new paradigm of applied mathematical statistics one should prefer non-parametric methods and models. However, in applied statistics we currently use a variety of parametric models. The term "parametric" means that the probabilistic-statistical model is fully described by a finite-dimensional vector of fixed dimension, and this dimension does not depend on the size of the sample. In parametric statistics the estimation problem is to estimate the value of the parameter, unknown to the statistician, by means of the best (in some sense) method. In the statistical problems of standardization and quality control we use a three-parameter family of gamma distributions. In this article, it is considered as an example of a parametric distribution family. We compare the methods for estimating the parameters. The method of moments is universal. However, the estimates obtained with the help of the method of moments have optimal properties only in rare cases. Maximum likelihood estimation (MLE) belongs to the class of the best asymptotically normal estimates. In most cases, analytical solutions do not exist; therefore, to find the MLE it is necessary to apply numerical methods. However, the use of numerical methods creates numerous problems. Convergence of iterative algorithms requires justification. In a number of examples of the analysis of real data, the likelihood function has many local maxima, and because of that natural iterative procedures do not converge. We suggest the use of one-step estimates (OS-estimates). They have asymptotic properties as good as those of the maximum likelihood estimators, under the same regularity conditions as MLE. One-step estimates are written in the form of explicit formulas. In this article it is proved that the one-step estimates are the best asymptotically normal estimates (under natural conditions). We have found OS-estimates for the gamma distribution and given the results of calculations using data on operating time…
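The method of moments mentioned above does have a simple closed form for the two-parameter gamma family (the article treats the harder three-parameter case): matching the sample mean m and variance v gives shape k = m²/v and scale θ = v/m. A quick numerical check under those simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.5, scale=1.5, size=50_000)   # synthetic gamma sample

# Method-of-moments estimates for the two-parameter gamma distribution
m, v = x.mean(), x.var()
k_hat = m * m / v      # shape estimate
theta_hat = v / m      # scale estimate
```

Unlike the MLE, these estimates need no iteration, which is the appeal of the one-step estimates discussed in the abstract; their efficiency, however, is generally lower.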
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
Maximum likelihood estimation of phase-type distributions
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Maximum likelihood estimation of the attenuated ultrasound pulse
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations, and hence heavy-tailed marginal distributions of the log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
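A minimal sketch of the recommended Poisson maximum-likelihood fit, with a grid search standing in for the modified Levenberg-Marquardt step and a hypothetical Gaussian-shaped one-parameter spectral model (true amplitude A = 10):

```python
import math
import random

random.seed(1)

# Hypothetical setup: counts in 41 bins under a Gaussian-shaped model
# m_i = A * exp(-x_i^2 / 2) with true amplitude A = 10.
xs = [-4.0 + 0.2 * i for i in range(41)]
shape = [math.exp(-x * x / 2.0) for x in xs]
true_A = 10.0

def poisson_sample(mu):
    # Knuth's method; adequate for small expected counts
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson_sample(true_A * s) for s in shape]

def poisson_nll(A):
    """Poisson negative log-likelihood over all bins (n! constant dropped)."""
    nll = 0.0
    for n, s in zip(counts, shape):
        m = A * s
        nll += m - (n * math.log(m) if n > 0 else 0.0)
    return nll

# Grid search stands in for the modified Levenberg-Marquardt minimization
A_ml = min((0.01 * j for j in range(500, 1500)), key=poisson_nll)
print(round(A_ml, 2))
```

Unlike a Neyman chi^2 fit weighted by the observed counts, minimizing this Poisson likelihood remains unbiased even when many bins hold only a handful of events.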
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is adopted to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
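The robustness mechanism of the MCC can be seen in the Gaussian kernel it is built on; this sketch (with an arbitrary kernel bandwidth σ, chosen here for illustration) shows how impulsive residuals receive vanishing weight:

```python
import math

def correntropy_weight(error, sigma=2.0):
    """Gaussian kernel at the heart of the maximum correntropy criterion:
    kappa_sigma(e) = exp(-e^2 / (2 * sigma^2)).  Large (impulsive) residuals
    get exponentially small weight, which is what makes MCC-based filters
    robust to heavy-tailed measurement noise."""
    return math.exp(-error * error / (2.0 * sigma * sigma))

w_small = correntropy_weight(0.5)   # ordinary residual: weight near 1
w_large = correntropy_weight(10.0)  # impulsive outlier: weight near 0
print(round(w_small, 4), round(w_large, 8))
```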
Evaluating maximum likelihood estimation methods to determine the hurst coefficients
Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.
1999-12-01
A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
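For reference, the long-memory behavior being estimated can be illustrated through the fGn autocorrelation function; this is the process model, not the S-MLE estimator itself:

```python
def fgn_autocorrelation(k, H):
    """Autocorrelation of fractional Gaussian noise at lag k:
    rho(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)).
    For 0.5 < H < 1 the decay is slow (long memory); H = 0.5 gives
    uncorrelated white noise."""
    h2 = 2.0 * H
    return 0.5 * (abs(k + 1) ** h2 - 2.0 * abs(k) ** h2 + abs(k - 1) ** h2)

rho_long = [fgn_autocorrelation(k, 0.9) for k in (1, 10, 100)]
rho_white = [fgn_autocorrelation(k, 0.5) for k in (1, 10, 100)]
print([round(r, 4) for r in rho_long], rho_white)
```

The slow power-law decay visible for H = 0.9 is exactly what makes short series hard to classify as fd versus fGn, motivating the bias study in the abstract.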
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
Bayesian and maximum likelihood estimation of genetic maps
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven
2005-01-01
There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
Estimating landscape carrying capacity through maximum clique analysis.
Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H
2012-12-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km2 study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m2 HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
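The maximum clique step can be sketched on a toy compatibility graph. The brute-force search below is only for illustration (it is exponential in the number of points); the paper relies on the dedicated solver Cliquer for real landscapes with up to ~1500 pseudo-home ranges.

```python
from itertools import combinations

def maximum_clique(vertices, edges):
    """Brute-force maximum clique: the largest subset of vertices that are
    all pairwise linked.  Only suitable for tiny illustrative graphs."""
    adj = set(map(frozenset, edges))
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all(frozenset(p) in adj for p in combinations(subset, 2)):
                return set(subset)
    return set()

# Hypothetical compatibility graph: vertices are pseudo-home-range centers,
# edges link pairs whose territories can coexist without overlap.
verts = [0, 1, 2, 3, 4]
compat = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
clique = maximum_clique(verts, compat)
print(sorted(clique), len(clique))
```

The clique size is the carrying-capacity estimate N(k) for this toy landscape: the most non-conflicting territories that fit simultaneously.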
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
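The Delogne-Kåsa estimator itself reduces to a linear least-squares fit; a self-contained sketch, solving the normal equations directly on hypothetical noise-free points, is:

```python
import math

def fit_circle_kasa(points):
    """Delogne-Kasa circle fit: linearize (x-a)^2 + (y-b)^2 = r^2 as
    2*a*x + 2*b*y + c = x^2 + y^2  with  c = r^2 - a^2 - b^2,
    then solve the 3x3 normal equations by Gaussian elimination."""
    # Accumulate A^T A and A^T z for rows [2x, 2y, 1], target z = x^2 + y^2
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (2.0 * x, 2.0 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            v[i] += row[i] * z
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            v[r] -= f * v[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (v[r] - sum(M[r][j] * sol[j] for j in range(r + 1, 3))) / M[r][r]
    a, b, c = sol
    return a, b, math.sqrt(c + a * a + b * b)

# Noise-free points on the circle centered at (1, -2) with radius 3
pts = [(1 + 3 * math.cos(t), -2 + 3 * math.sin(t))
       for t in (0.1, 0.9, 2.0, 3.1, 4.2, 5.5)]
a, b, r = fit_circle_kasa(pts)
print(round(a, 6), round(b, 6), round(r, 6))
```

On noisy pixels this linear fit is only an approximation to the MLE, which is the gap the paper's convolution-based approach addresses.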
Maximum Entropy Estimation of n-Year Extreme Waveheights
徐德伦; 张军; 郑桂珍
2004-01-01
A new method for estimating the n (50 or 100)-year return-period waveheight, namely the extreme waveheight expected to occur in n years, is presented on the basis of the maximum entropy principle. The main points of the method are as follows: (1) based on the Hamiltonian principle, a maximum entropy probability density function for the extreme waveheight H, f(H) = αH^γ e^(−βH^4), is derived from a Lagrangian function subject to some necessary and rational constraints; (2) the parameters α, β and γ in the function are expressed in terms of the mean H̄, the variance V = E[(H − H̄)^2] and the bias B = E[(H − H̄)^3]; and (3) with H̄, V and B estimated from observed data, the n-year return-period waveheight H_n is computed in accordance with the formula 1/[1 − F(H_n)] = n, where F(H_n) is defined as F(H_n) = ∫_0^{H_n} f(H) dH. Examples of estimating the 50- and 100-year return-period waveheights by the present method and by some currently used methods from observed data acquired at two hydrographic stations are given. A comparison of the estimated results shows that the present method is superior to the others.
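With hypothetical values for β and γ (and α fixed by normalization, here handled numerically), the return-period computation of step (3) can be sketched as:

```python
import math

# Made-up parameters for illustration only; the paper fits them from the
# sample mean, variance and bias of observed waveheights.
gamma_, beta = 2.0, 0.005

def f_unnorm(h):
    # Maximum-entropy density form f(H) = alpha * H^gamma * exp(-beta * H^4)
    return h ** gamma_ * math.exp(-beta * h ** 4)

# Tabulate the density on a grid and normalize by the trapezoid rule
hs = [0.002 * i for i in range(5001)]          # H from 0 to 10
ys = [f_unnorm(h) for h in hs]
total = sum((ys[i] + ys[i + 1]) * 0.001 for i in range(len(hs) - 1))

def return_height(n):
    """Smallest H_n with F(H_n) >= 1 - 1/n, i.e. 1 / (1 - F(H_n)) = n."""
    target = 1.0 - 1.0 / n
    cdf = 0.0
    for i in range(len(hs) - 1):
        cdf += (ys[i] + ys[i + 1]) * 0.001 / total
        if cdf >= target:
            return hs[i + 1]
    return hs[-1]

h50, h100 = return_height(50), return_height(100)
print(h50, h100)
```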
A Maximum-Entropy Method for Estimating the Spectrum
(author not listed)
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form S(ω) = (a/8) H̄² (2π)^(d+1) ω^(−(d+2)) exp[−b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions, and is superior to the periodogram method, which is not suitable for processing comparatively short or intensively unsteady signals because of its tremendous boundary effect and some inherent defects of the FFT. Fortunately, the newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily proved.
Estimating the maximum potential revenue for grid connected electricity storage
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and as a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
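The paper's linear program is not reproduced here; this toy stands in with an exhaustive search over charge/idle/discharge decisions under perfect price foresight, which yields the same "maximum potential revenue" upper bound for a small horizon and a lossless device with an integer state of charge (all assumptions for illustration).

```python
from itertools import product

def max_arbitrage_revenue(prices, capacity=2, power=1, soc0=0):
    """Upper bound on arbitrage revenue: try every plan of
    charge(-1) / idle(0) / discharge(+1) decisions, keep the best feasible
    one.  Perfect price foresight makes this an upper bound on what any
    real trading strategy could earn."""
    best = float("-inf")
    for plan in product((-1, 0, 1), repeat=len(prices)):
        soc, revenue, feasible = soc0, 0.0, True
        for action, price in zip(plan, prices):
            soc -= action * power              # discharging lowers the state of charge
            if not 0 <= soc <= capacity:
                feasible = False
                break
            revenue += action * power * price  # sell when discharging, buy when charging
        if feasible and revenue > best:
            best = revenue
    return best

# Hypothetical hourly prices ($/MWh): cheap at night, expensive in the evening
prices = [20, 15, 18, 60, 75, 30]
print(max_arbitrage_revenue(prices))
```

The optimum here buys the two cheapest hours it can still sell later (15 and 18) and discharges into the two price peaks (60 and 75); a linear program recovers the same answer in polynomial time and scales to a year of hourly data.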
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
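The core idea, rotating the horizontal components and maximizing correlation with a reference instrument, can be sketched with a coarse grid search standing in for the paper's non-linear parameter estimation routine (synthetic signals and a hypothetical 30-degree misorientation):

```python
import math

def rotate_north(n, e, theta):
    """North component of the horizontal pair (n, e) rotated by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * ni - s * ei for ni, ei in zip(n, e)]

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    return num / math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))

def estimate_azimuth(ref_north, n, e, steps=720):
    """0.5-degree grid search for the rotation maximizing correlation with
    the reference; the paper's routine refines this to arbitrary precision."""
    return max((2 * math.pi * k / steps for k in range(steps)),
               key=lambda th: correlation(ref_north, rotate_north(n, e, th)))

# Synthetic signals; the test sensor is mis-oriented by 30 degrees
t = [0.02 * i for i in range(500)]
ref = [math.sin(2.2 * x) + 0.5 * math.sin(0.7 * x) for x in t]
east = [math.cos(1.3 * x) for x in t]
phi = math.radians(30.0)
n_obs = [math.cos(phi) * a + math.sin(phi) * b for a, b in zip(ref, east)]
e_obs = [-math.sin(phi) * a + math.cos(phi) * b for a, b in zip(ref, east)]
azimuth_deg = math.degrees(estimate_azimuth(ref, n_obs, e_obs))
print(round(azimuth_deg, 1))
```

Repeating this over overlapping windows, as the abstract describes, turns the single estimate into a distribution from which a confidence can be judged even at low SNR.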
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
Full Text Available In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results are related to guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
Approximated maximum likelihood estimation in multifractal random walks
Løvsletten, Ola
2011-01-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models, conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques, an approximation is obtained to the bias in variance estimation, yielding a bias-corrected variance estimator. This is achieved for both the standard
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The bias of the estimators is analysed in depth.
Louis de Grange
2010-09-01
Full Text Available Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing the behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
Trumbo, S. K.; Palacios, S. L.; Zimmerman, R. C.; Kudela, R. M.
2012-12-01
Macrocystis pyrifera, giant kelp, is a major primary producer of the California coastal ocean that provides habitat for marine species through the formation of massive kelp beds. The estimation of primary productivity of these kelp beds is essential for a complete understanding of their health and of the biogeochemistry of the region. Current methods involve either the application of a proportionality constant to remotely sensed biomass or in situ frond density measurements. The purpose of this research was to improve upon conventional primary productivity estimates by developing a model which takes into account the spectral differences among juvenile, mature, and senescent tissues as well as the photosynthetic contributions of subsurface kelp. A modified version of a seagrass productivity model (Zimmerman 2006) was used to quantify carbon fixation. Inputs included estimates of the underwater light field as computed by solving the radiative transfer equation (with the Hydrolight(TM) software package) and biological parameters obtained from the literature. It was found that mature kelp is the most efficient primary producer, especially in light-limited environments, due to increased light absorptance. It was also found that incoming light attenuates below useful levels for photosynthesis more rapidly than has been previously accounted for in productivity estimates, with productivity dropping below half maximum at approximately 0.75 m. As a case study for comparison with the biomass method, the model was applied to Isla Vista kelp bed in Santa Barbara, using area estimates from the MODIS-ASTER Simulator (MASTER). A graphical user-interface was developed for users to provide inputs to run the kelp productivity model under varying conditions. Accurately quantifying kelp productivity is essential for understanding its interaction with offshore ecosystems as well as its contribution to the coastal carbon cycle.
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily in recent years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We therefore implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or fewer when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to that child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered, and no significant differences between the estimates obtained with the usual EM algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
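The core of such programs is the classical gene-counting EM algorithm. A minimal sketch for two biallelic SNPs and unrelated individuals follows (FAMHAP's family-based version additionally exploits transmission information, which is not modelled here; the genotype data are hypothetical).

```python
from itertools import product

# Haplotypes over two biallelic SNPs; alleles coded 0/1.
HAPLOTYPES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def em_haplotype_freqs(genotypes, n_iter=100):
    """EM (gene-counting) estimate of haplotype frequencies from unphased
    two-SNP genotypes, each given as (dose_of_1_at_snp1, dose_of_1_at_snp2)
    with doses in {0, 1, 2}."""
    freqs = {h: 0.25 for h in HAPLOTYPES}
    for _ in range(n_iter):
        counts = {h: 0.0 for h in HAPLOTYPES}
        for g in genotypes:
            # All ordered haplotype pairs consistent with this genotype.
            pairs = [(h1, h2) for h1, h2 in product(HAPLOTYPES, repeat=2)
                     if (h1[0] + h2[0], h1[1] + h2[1]) == g]
            total = sum(freqs[h1] * freqs[h2] for h1, h2 in pairs)
            for h1, h2 in pairs:  # E-step: expected haplotype counts
                w = freqs[h1] * freqs[h2] / total
                counts[h1] += w
                counts[h2] += w
        n = 2 * len(genotypes)    # M-step: renormalise
        freqs = {h: counts[h] / n for h in HAPLOTYPES}
    return freqs

# Example: ten 0/0 double homozygotes and ten double heterozygotes.
data = [(0, 0)] * 10 + [(1, 1)] * 10
f = em_haplotype_freqs(data)
```

Only the double heterozygotes are phase-ambiguous; the EM iterations resolve them towards the (0,0)/(1,1) configuration because the homozygotes make the (0,0) haplotype common.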
Penalized maximum likelihood estimation for generalized linear point processes
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.
X. Gong
2014-06-01
A bell-shaped vertical profile of chlorophyll a (Chl a) concentration, conventionally referred to as the Subsurface Chlorophyll Maximum (SCM) phenomenon, has frequently been observed in stratified oceans and lakes. This profile is assumed to be a general Gaussian distribution in this study. By substituting the general Gaussian function into ecosystem dynamical equations, the steady-state solutions for SCM characteristics (i.e. SCM layer depth, thickness, and intensity) in various scenarios are derived. These solutions indicate that: (1) the maximum in Chl a concentration occurs at or below the depth of maximum phytoplankton growth rate, located at the transition from nutrient limitation to light limitation, and the depth of the SCM layer deepens logarithmically with an increase in surface light intensity; (2) the shape of the SCM layer (thickness and intensity) is mainly influenced by nutrient supply, but is independent of surface light intensity; (3) the intensity of the SCM layer is proportional to the diffusive flux of nutrients from below, growing stronger as the layer is compressed by a higher light attenuation coefficient or a larger sinking velocity of phytoplankton. The analytical solutions can be useful for estimating environmental parameters that are difficult to obtain from on-site observations.
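The Gaussian profile and the three SCM characteristics it encodes can be written down directly. In the sketch below the parameter values are illustrative, and the conventions chosen (intensity as peak excess above background, thickness as the full width at half maximum of the peak) are one common parameterisation, not necessarily the exact one used in the paper.

```python
import math

def chl_profile(z, c0, h, zm, sigma):
    """Generalised Gaussian Chl-a profile: background c0 plus a subsurface
    peak of integrated excess h centred at depth zm with spread sigma."""
    return c0 + h / (sigma * math.sqrt(2 * math.pi)) * math.exp(
        -(z - zm) ** 2 / (2 * sigma ** 2))

def scm_characteristics(c0, h, zm, sigma):
    """SCM depth, intensity (peak Chl-a above background) and
    thickness (full width at half maximum of the peak)."""
    intensity = h / (sigma * math.sqrt(2 * math.pi))
    thickness = 2 * math.sqrt(2 * math.log(2)) * sigma
    return zm, intensity, thickness

# Hypothetical profile: peak at 45 m depth, 8 m spread.
depth, intensity, thickness = scm_characteristics(c0=0.1, h=20.0, zm=45.0, sigma=8.0)
peak_value = chl_profile(45.0, 0.1, 20.0, 45.0, 8.0)
```

Fitting these four parameters to a measured profile then yields the SCM depth, intensity and thickness used in the steady-state analysis.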
Rizzo, R. E.; Healy, D.; De Siena, L.
2015-12-01
The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulty of accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurate probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to establish whether the average permeability of a fracture network can be predicted with reduced uncertainty, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
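Fracture attribute distributions such as lengths are often modelled as power laws, for which the maximum likelihood exponent has a well-known closed form (the Hill-type estimator for a continuous power law above a lower cutoff). A minimal sketch with synthetic "fracture lengths" follows; the specific distribution fitted in the study may differ.

```python
import math
import random

def powerlaw_alpha_mle(data, xmin):
    """Maximum likelihood estimate of the exponent of a continuous
    power law p(x) ~ x^(-alpha) for x >= xmin."""
    tail = [x for x in data if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Synthetic fracture lengths drawn from a power law by inverse transform.
rng = random.Random(1)
alpha_true, xmin = 2.5, 0.1
sample = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]
alpha_hat = powerlaw_alpha_mle(sample, xmin)
```

Unlike least-squares fits to log-binned histograms, this estimator is unbiased in the large-sample limit and its standard error shrinks as (alpha - 1)/sqrt(n), which is the accuracy advantage alluded to in the abstract.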
On a robust and efficient maximum depth estimator
ZUO YiJun; LAI ShaoYong
2009-01-01
The best breakdown-point robustness is one of the most outstanding features of the univariate median. For this robustness property, however, the median has to pay the price of low efficiency at normal and other light-tailed models. Affine equivariant multivariate analogues of the univariate median with high breakdown points were constructed in the past two decades. For their high breakdown robustness, most of them nevertheless also sacrifice efficiency at normal and other models. The affine equivariant maximum depth estimator proposed and studied in this paper turns out to be an exception. Like the univariate median, it possesses the highest breakdown point among all its multivariate competitors. Unlike the univariate median, it is also highly efficient relative to the sample mean at normal and various other distributions, overcoming the vital low-efficiency shortcoming of the univariate and other multivariate generalized medians. The paper also studies the asymptotics of the estimator and establishes its limit distribution without symmetry and other strong assumptions that are typically imposed on the underlying distribution.
Maximum likelihood estimation for cytogenetic dose-response curves
Frome, E.L.; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
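For acute exposures the model reduces to a linear-quadratic yield λ(d) = a·d + b·d² per cell, and the Poisson MLE can be computed by Fisher scoring. The sketch below uses entirely hypothetical dose/count data and a simple identity-link scoring loop; it is a toy version, not the general split-dose/continuous machinery of the paper.

```python
def fit_dose_response(doses, cells, dicentrics, n_iter=50):
    """Poisson maximum likelihood fit of the linear-quadratic yield
    lambda(d) = a*d + b*d^2 (dicentrics per cell) by Fisher scoring."""
    def solve2(m11, m12, m22, v1, v2):
        det = m11 * m22 - m12 * m12          # 2x2 linear solve
        return (m22 * v1 - m12 * v2) / det, (m11 * v2 - m12 * v1) / det

    # Regressors are expected-total design columns: n*d and n*d^2.
    x1 = [n * d for n, d in zip(cells, doses)]
    x2 = [n * d * d for n, d in zip(cells, doses)]
    # Least-squares starting values for (a, b).
    a, b = solve2(sum(v * v for v in x1),
                  sum(u * v for u, v in zip(x1, x2)),
                  sum(v * v for v in x2),
                  sum(u * y for u, y in zip(x1, dicentrics)),
                  sum(u * y for u, y in zip(x2, dicentrics)))
    for _ in range(n_iter):
        lam = [u * a + v * b for u, v in zip(x1, x2)]   # expected totals
        u1 = sum(x * (y - l) / l for x, y, l in zip(x1, dicentrics, lam))
        u2 = sum(x * (y - l) / l for x, y, l in zip(x2, dicentrics, lam))
        i11 = sum(x * x / l for x, l in zip(x1, lam))   # Fisher information
        i12 = sum(p * q / l for p, q, l in zip(x1, x2, lam))
        i22 = sum(x * x / l for x, l in zip(x2, lam))
        da, db = solve2(i11, i12, i22, u1, u2)
        a, b = a + da, b + db
    return a, b

# Hypothetical acute-exposure data: dose (Gy), cells scored, dicentrics seen.
doses = [0.5, 1.0, 2.0, 3.0, 4.0]
cells = [1000, 1000, 1000, 1000, 1000]
dicentrics = [37, 115, 330, 700, 1150]
a_hat, b_hat = fit_dose_response(doses, cells, dicentrics)
```

The Fisher information matrix evaluated at the solution also supplies the asymptotic covariance matrix used for hypothesis testing and diagnostics.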
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are inherently versatile optical transmitters. Through direct modulation (DM), intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through digital coherent detection, simultaneous intensity and angle modulation (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of the frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability that significantly enhances the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600 km of fiber with only 10G-class DM lasers.
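MLSE is conventionally implemented with the Viterbi algorithm over a trellis of channel states. The toy sketch below decodes binary symbols through a hypothetical two-tap channel with squared-error branch metrics (the Gaussian-noise case); the paper's estimator replaces this simple metric with a chirp-aware transition probability.

```python
def mlse_viterbi(received, h0=1.0, h1=0.5):
    """ML sequence estimation for a binary (+1/-1) signal through a
    two-tap channel r_k = h0*s_k + h1*s_{k-1}, via the Viterbi algorithm
    with squared-error branch metrics (Gaussian noise assumption)."""
    symbols = (+1, -1)
    cost = {s: 0.0 for s in symbols}     # best metric ending in state s
    paths = {s: [] for s in symbols}     # surviving symbol sequences
    for r in received:
        new_cost, new_paths = {}, {}
        for s in symbols:                # candidate current symbol
            best = None
            for prev in symbols:         # candidate previous symbol (state)
                m = cost[prev] + (r - (h0 * s + h1 * prev)) ** 2
                if best is None or m < best[0]:
                    best = (m, prev)
            new_cost[s] = best[0]
            new_paths[s] = paths[best[1]] + [s]
        cost, paths = new_cost, new_paths
    final = min(symbols, key=lambda s: cost[s])
    return paths[final]

# Noiseless sanity check with a known symbol sequence.
tx = [+1, -1, -1, +1, +1, -1, +1]
prev = +1                                # assumed initial channel memory
rx = []
for s in tx:
    rx.append(1.0 * s + 0.5 * prev)
    prev = s
decoded = mlse_viterbi(rx)
```

In the noiseless case the zero-metric path is unique, so the decoder recovers the transmitted sequence exactly.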
Local solutions of Maximum Likelihood Estimation in Quantum State Tomography
Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto
2011-01-01
Maximum likelihood estimation is one of the most widely used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases, however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussions for quantum informat…
Physics-based estimates of maximum magnitude of induced earthquakes
Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin
2016-04-01
In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations that are representative of the induced-seismicity environment. We present physics-based predictions of Mmax for a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
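The empirical benchmark the authors compare against is McGarr's (2014) volume bound, in which the maximum seismic moment is limited by the shear modulus times the injected volume. A minimal sketch (with illustrative parameter values; the paper's own physics-based scaling differs) is:

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on induced-event size (McGarr, 2014): seismic moment
    M0 <= G * dV, converted to moment magnitude via
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    m0 = shear_modulus_pa * injected_volume_m3   # bounding moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Example: 10,000 m^3 of injected fluid, typical crustal shear modulus.
mw = mcgarr_max_magnitude(1.0e4)
```

For these values the bound sits near Mw 3.6, which is the kind of estimate the physics-based Mmax predictions are tested against.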
Morelli, Agnese; Bruno, Luigi; Cleveland, David M.; Drexler, Tina M.; Amorosi, Alessandro
2017-10-01
Paleosols are commonly used to reconstruct ancient landscapes and past environmental conditions. Through identification and subsurface mapping of two pedogenically modified surfaces, formed at the onset of the Last Glacial Maximum (LGM) and during the Younger Dryas (YD) cold event, respectively, and based on their lateral correlation with coeval channel-belt sand bodies, we assessed the geomorphic processes affecting the Po coastal plain during the Late Pleistocene (30-11.5 cal ky BP). The 3D reconstruction of the LGM and YD paleosurfaces provides insight into the paleolandscapes that developed in the Po alluvial plain at the transitions between warm and cold climate periods. The LGM paleosol records a stratigraphic hiatus of approximately 5 kyr (29-24 cal ky BP), whereas the development of the YD paleosol was associated with a climatic episode of significantly shorter duration. Both paleosols, dissected by Apennine rivers flowing from the south, dip towards the north-east, where they are replaced by fluvial channel belts fed by the Po River. The LGM channel-belt sand body reflects the protracted lateral migration of the Po River at the onset of the glacial maximum. It is wider (> 24 km) and thicker (~15 m) than the fluvial sand body formed during the YD. The northern margin of LGM Po channel-belt deposits was not encountered in the study area. In contrast, a spatially restricted paleosol, identified in the north at the same elevation as the southern plateau, may represent a local expression of the Alpine interfluve during the YD event. This study highlights how 3D mapping of regionally extensive, weakly developed paleosols can be used to assess the geomorphic response of an alluvial system to rapid climate change.
Estimate of the maximum induced magnetic field in relativistic shocks
Ghorbanalilu, M.; Sadegzadeh, S.
2017-01-01
The proton-driven Weibel instability is a crucial process for amplifying the generated magnetic fields in gamma-ray bursts. An expression for the saturation level of the magnetic fields is estimated in a relativistic shock consisting of electron-proton plasmas. Within the shock transition layer, the plasma is modelled with waterbag and Maxwell-Jüttner distribution functions for the asymmetric counter-propagating proton beams and the isotropic background electrons, respectively. The proton-driven Weibel-type instability in the linear phase is investigated thoroughly, and the instability conditions and stabilization mechanisms are considered in detail just after the shutdown of the electron Weibel instability. The growth rate of the instability and the saturated magnetic field strength are obtained in terms of the effective proton beam Mach number, the asymmetry parameter, and the background electron temperature. In this paper, a fully relativistic kinetic treatment is used to formulate the dispersion relation for the proton Weibel-type instability. Then, by using the magnetic trapping criterion, the saturated magnetic field strength is computed. In the present scenario, the instability proceeds in two stages: in the first, the electron Weibel instability evolves very rapidly; in the second, because of the free energy stored in the slow counter-propagating proton beams, the instability is further amplified in the context of electrons with an isotropic distribution function. The results show that the growth rate and saturated magnetic field increase with an increasing effective proton beam Mach number and with a decreasing asymmetry parameter. It is shown that at temperatures around 10^8 K a maximum magnetic field of up to around 56 G can be produced by this mechanism after the saturation time.
Marasek, K.; Nowicki, A.
1994-01-01
The performance of three spectral techniques (FFT, AR Burg and ARMA) for maximum frequency estimation of Doppler spectra is described. Different definitions of fmax were used: the frequency at which spectral power decreases to 0.1 of its maximum value, the modified threshold crossing method (MTCM), and a novel geometrical method. The goodness and efficiency of the estimators were determined by calculating the bias and standard deviation of the estimated maximum frequency of simulated Doppler spectra with known statistics. The power of the analysed signals was assumed to have an exponential distribution function. The SNR was varied over the range from 0 to 20 dB. Different spectrum envelopes were generated: a Gaussian envelope approximated narrowband spectral processes (P.W. Doppler), and rectangular spectra were used to simulate a parabolic flow insonified with C.W. Doppler. The simulated signals were generated from 3072-point records with a sampling frequency of 20 kHz. The AR and ARMA model orders were selected independently according to the Akaike Information Criterion (AIC) and Singular Value Decomposition (SVD). It was found that the ARMA model, computed according to the SVD criterion, had the best overall performance and produced results with the smallest bias and standard deviation. In general AR(SVD) was better than AR(AIC). The geometrical method of fmax estimation was found to be more accurate than the other tested methods, especially for narrowband signals.
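The first of the three fmax definitions (power falling to 0.1 of the spectral peak) is straightforward to sketch; the MTCM and geometrical variants refine this idea. The toy spectrum below is hypothetical.

```python
def fmax_threshold(freqs, powers, fraction=0.1):
    """Maximum Doppler frequency by the simple threshold method: the
    highest frequency whose spectral power still exceeds `fraction`
    of the spectrum's peak power."""
    threshold = fraction * max(powers)
    fmax = None
    for f, p in zip(freqs, powers):
        if p >= threshold:
            fmax = f                      # keep the last crossing
    return fmax

# Toy spectrum: flat "rectangular" band up to 2 kHz plus a weak tail,
# mimicking C.W. Doppler insonification of a parabolic flow.
freqs = [100.0 * k for k in range(51)]    # 0 ... 5 kHz
powers = [1.0 if f <= 2000.0 else 0.01 for f in freqs]
f_est = fmax_threshold(freqs, powers)
```

In practice the estimate is applied to an averaged periodogram or to AR/ARMA spectra, and its bias depends strongly on SNR and the spectral envelope, which is exactly what the paper's simulations quantify.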
Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.
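The MCMC machinery can be illustrated on a much smaller problem than the discovery process model: random-walk Metropolis sampling of a Poisson rate with a conjugate Gamma prior, where the exact posterior is known and can be used as a check. All data and prior values below are hypothetical.

```python
import math
import random

def mh_poisson_rate(counts, a0, b0, n_samples, rng):
    """Random-walk Metropolis sampling of the posterior of a Poisson
    rate lam under a Gamma(a0, b0) prior (shape/rate parametrisation)."""
    def log_post(lam):
        if lam <= 0:
            return float("-inf")          # outside the support: reject
        # Gamma prior + Poisson likelihood, up to an additive constant.
        return ((a0 - 1) * math.log(lam) - b0 * lam
                + sum(counts) * math.log(lam) - len(counts) * lam)

    lam, samples = 1.0, []
    for _ in range(n_samples):
        prop = lam + rng.gauss(0.0, 0.5)  # symmetric proposal
        if math.log(rng.random()) < log_post(prop) - log_post(lam):
            lam = prop
        samples.append(lam)
    return samples

rng = random.Random(7)
counts = [3, 5, 4, 6, 2, 4, 5, 3]         # hypothetical discovery counts
samples = mh_poisson_rate(counts, a0=2.0, b0=1.0, n_samples=20000, rng=rng)
posterior_mean = sum(samples[2000:]) / len(samples[2000:])
```

Here the exact posterior is Gamma(a0 + sum(counts), b0 + n) with mean 34/9, so the chain's post-burn-in average should sit close to 3.78, illustrating the convergence monitoring mentioned in the abstract.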
Nagano, K.; Sato, K. [Muroran Institute of Technology, Hokkaido (Japan); Niitsuma, H. [Tohoku University, Sendai (Japan)
1996-05-01
This paper reports experiments carried out to estimate subsurface-fracture orientation with three-component crack-wave measurement. The experiments were performed using existing subsurface cracks and two wells in the experimental field. An air gun serving as a sound source was installed directly above a subsurface crack intersection in one of the wells, and a three-component elastic wave detector was fixed in the vicinity of a subsurface crack intersection in the other well. Crack waves from the sound source were measured in a frequency bandwidth from 150 to 300 Hz. A coherence matrix was constructed from the triaxial components of vibration in the crack waves; a coherent vector corresponding to the maximum coherence value of the matrix was sought; and the direction of the longer axis of the ellipse traced by the particle motions of the crack waves (the direction perpendicular to the crack face) was approximated using this vector. The normal direction of the crack face estimated by this method was found to agree closely with the direction of minimum crustal compressive stress, as measured normal to crack faces observed in core samples collected from the wells at nearly the same position as the subsurface crack. 5 refs., 4 figs.
Moderate deviations of maximum likelihood estimators under alternatives
Inglot, T.; Kallenberg, W.C.M.
2000-01-01
Since statistical models are simplifications of reality, it is important in estimation theory to study the behavior of estimators also under distributions (slightly) different from the proposed model. In testing theory, when dealing with test statistics where nuisance parameters are estimated, …
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
The aspect of correlation among the blood velocities in time and space has not received much attention in previous blood velocity estimators. The theory of fluid mechanics predicts this property of the blood flow. Additionally, most estimators based on a cross-correlation analysis are limited … of simulated and in vivo data from the carotid artery. The estimator is meant for two-dimensional (2-D) color flow imaging. The resulting mathematical relation for the estimator consists of two terms. The first term performs a cross-correlation analysis on the signal segment in the radio frequency (RF) data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator …
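The cross-correlation term in such estimators finds the inter-emission time shift of the scatterer pattern. A minimal sketch with synthetic RF lines follows (the flow-physics prior term of the paper's estimator is not modelled here).

```python
import random

def best_lag(signal_a, signal_b, max_lag):
    """Lag (in samples) maximising the cross-correlation of two RF
    segments; the basis of time-shift blood velocity estimation."""
    best, best_corr = 0, float("-inf")
    n = len(signal_a)
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(signal_a[i] * signal_b[i + lag]
                   for i in range(n)
                   if 0 <= i + lag < len(signal_b))
        if corr > best_corr:
            best, best_corr = lag, corr
    return best

# Second emission sees the same scatterer pattern delayed by 5 samples.
rng = random.Random(3)
line1 = [rng.gauss(0.0, 1.0) for _ in range(200)]
shift = 5
line2 = [0.0] * shift + line1[:-shift]    # delayed echo
lag = best_lag(line1, line2, max_lag=20)
```

The recovered lag, together with the sampling rate, pulse repetition period and speed of sound, converts to an axial velocity estimate; the second term of the new estimator then weighs candidate lags by their consistency with neighbouring estimates.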
Kurylyk, Barret L.; Irvine, Dylan J.
2016-02-01
This study details the derivation and application of a new analytical solution to the one-dimensional, transient conduction-advection equation that is applied to trace vertical subsurface fluid fluxes. The solution employs a flexible initial condition that allows for nonlinear temperature-depth profiles, providing a key improvement over most previous solutions. The boundary condition is composed of any number of superimposed step changes in surface temperature, and thus it accommodates intermittent warming and cooling periods due to long-term changes in climate or land cover. The solution is verified using an established numerical model of coupled groundwater flow and heat transport. A new computer program FAST (Flexible Analytical Solution using Temperature) is also presented to facilitate the inversion of this analytical solution to estimate vertical groundwater flow. The program requires surface temperature history (which can be estimated from historic climate data), subsurface thermal properties, a present-day temperature-depth profile, and reasonable initial conditions. FAST is written in the Python computing language and can be run using a free graphical user interface. Herein, we demonstrate the utility of the analytical solution and FAST using measured subsurface temperature and climate data from the Sendai Plain, Japan. Results from these illustrative examples highlight the influence of the chosen initial and boundary conditions on estimated vertical flow rates.
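The building block of such superposition solutions is the classical conduction-advection response to a single step change in surface temperature (a Carslaw-Jaeger type erfc solution). The sketch below uses that standard constant-initial-condition form with dimensionless illustrative values; the paper's actual solution generalises the initial condition and superimposes many such steps.

```python
import math

def step_response(z, t, v, d, t0, dt):
    """1-D conduction-advection temperature at depth z and time t after a
    step change dt in surface temperature at t = 0, over a uniform initial
    temperature t0; v is the downward thermal front velocity and d the
    thermal diffusivity (classical erfc solution, an assumption here)."""
    s = 2.0 * math.sqrt(d * t)
    return t0 + 0.5 * dt * (math.erfc((z - v * t) / s)
                            + math.exp(v * z / d) * math.erfc((z + v * t) / s))

# Illustrative (dimensionless) check: the surface honours the new boundary
# temperature, while great depth still holds the initial temperature.
surface_now = step_response(z=0.0, t=1.0, v=0.1, d=1.0, t0=10.0, dt=2.0)
deep = step_response(z=10.0, t=1.0, v=0.1, d=1.0, t0=10.0, dt=2.0)
```

Because the governing equation is linear, a surface temperature history is handled by summing time-shifted copies of this response, which is the superposition idea behind the FAST boundary condition.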
ESTIMATION OF NEAR SUBSURFACE COAL FIRE GAS EMISSIONS BASED ON GEOPHYSICAL INVESTIGATIONS
Chen-Brauchler, D.; Meyer, U.; Schlömer, S.; Kus, J.; Gundelach, V.; Wuttke, M.; Fischer, C.; Rueter, H.
2009-12-01
Spontaneous and industrially caused subsurface coal fires are worldwide disasters that destroy coal resources, cause air pollution and emit large amounts of greenhouse gases. Especially in developing countries, such as China, India and Malaysia, this problem has intensified over the last 15 years. In China alone, 10 to 20 million tons of coal are believed to be lost in uncontrolled coal fires. The cooperation of developing and industrialized countries is needed to enforce internationally concerted approaches and draw political attention to the problem. The Clean Development Mechanism (CDM) under the framework of the Kyoto Protocol may provide an international stage for the financial investment needed to fight this disastrous situation. A Sino-German research project for coal fire exploration, monitoring and extinction applied several geophysical approaches in order to estimate the annual baseline of CO2 emissions from near-subsurface coal fires. As a result of this project, we present verifiable methodologies that may be used in the CDM framework to estimate the amount of CO2 emissions from near-subsurface coal fires. We developed three approaches to the estimation, based on (1) thermal energy release, (2) geological and geometrical determinations, and (3) direct gas measurement. The studies involve investigation of the changes in physical properties of the coal seam and bedrock during the different burning stages of an underground coal fire. Various geophysical monitoring methods were applied from near surface to determine the coal volume, fire propagation, temperature anomalies, etc.
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called Empirical Training, which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likeliho…
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimati…
Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca
2016-04-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used least-squares linear regression when investigating distributions of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed…
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and treats the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
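The core idea, estimating a gradient from an ensemble of directional derivatives without adjoint code, can be sketched on a toy objective. The least-squares solve below plays the role of the regularized inversion in ISEM (which uses truncated SVD or Tikhonov on the full Gauss-Newton system); the objective and all values are illustrative.

```python
import math
import random

def ensemble_gradient(f, x, n_dirs, eps, rng):
    """Stochastic gradient estimate from an ensemble of directional
    derivatives: least-squares solve of U g = b, where row k of U is a
    random unit direction and b_k its finite-difference derivative."""
    dim = len(x)
    rows, b = [], []
    f0 = f(x)
    for _ in range(n_dirs):
        u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]                     # unit direction
        xp = [xi + eps * ui for xi, ui in zip(x, u)]
        rows.append(u)
        b.append((f(xp) - f0) / eps)                  # directional derivative
    # Normal equations (U^T U) g = U^T b, solved directly for dim = 2.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    v1 = sum(r[0] * bk for r, bk in zip(rows, b))
    v2 = sum(r[1] * bk for r, bk in zip(rows, b))
    det = a11 * a22 - a12 * a12
    return [(a22 * v1 - a12 * v2) / det, (a11 * v2 - a12 * v1) / det]

# Toy misfit with known gradient at x = (3, 1): grad = [4, 12].
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
g = ensemble_gradient(f, [3.0, 1.0], n_dirs=50, eps=1e-5, rng=random.Random(0))
```

Each ensemble member costs one simulator run, which is why small ensembles with a regularized solve make the approach attractive for expensive black-box flow simulators.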
Maximum likelihood estimation for life distributions with competing failure modes
Sidik, S. M.
1979-01-01
The general model for competing failure modes is presented, assuming that the location parameters for each mode are expressible as linear functions of the stress variables and that the failure modes act independently. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least-squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory
Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul
1999-01-01
Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler, and automatic reconfiguration to the speed of terminal movement can optimize cell...
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…
Coupling diffusion and maximum entropy models to estimate thermal inertia
Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...
A Rayleigh Doppler frequency estimator derived from maximum likelihood theory
Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul
1999-01-01
Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler and automatic reconfiguration to the speed of terminal movement can optimize cell cap...
A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation
Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.
2011-01-01
Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank degenerate, is reduced
Maximum estimates for generalized Forchheimer flows in heterogeneous porous media
Celik, Emine; Hoang, Luan
2017-02-01
This article continues the study in [4] of generalized Forchheimer flows in heterogeneous porous media. Such flows are used to account for deviations from Darcy's law. In heterogeneous media, the derived nonlinear partial differential equation for the pressure can be singular and degenerate in the spatial variables, in addition to being degenerate for large pressure gradient. Here we obtain the estimates for the L∞-norms of the pressure and its time derivative in terms of the initial and the time-dependent boundary data. They are established by implementing De Giorgi-Moser's iteration in the context of weighted norms with the weights specifically defined by the Forchheimer equation's coefficient functions. With these weights, we prove suitable weighted parabolic Poincaré-Sobolev inequalities and use them to facilitate the iteration. Moreover, local in time L∞-bounds are combined with uniform Gronwall-type energy inequalities to obtain long-time L∞-estimates.
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
Presence-only data can be used to determine resource selection and estimate a species’ distribution. Maximum likelihood is a common parameter estimation method used for species distribution models. Maximum likelihood estimates, however, do not always exist for a commonly used species distribution model – the Poisson point process.
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
Comparison of two field tests to estimate maximum aerobic speed.
Berthoin, S; Gerbeaux, M; Turpin, E; Guerrin, F; Lensel-Corbeil, G; Vandendorpe, F
1994-08-01
The measurement of maximal aerobic speed (MAS) and the prediction of maximal oxygen uptake (VO2 max) by means of field tests were carried out on 17 students studying physical education. The subjects underwent a continuous multi-stage track test (Léger and Boucher, 1980), a shuttle test (Léger et al., 1984) and VO2 max measurement on a treadmill. The VO2 max values estimated using the track test (56.8 ± 5.8 ml kg⁻¹ min⁻¹) were not significantly different from the values measured in the treadmill test (56.8 ± 7.1 ml kg⁻¹ min⁻¹), but were higher than those estimated using the shuttle test (51.1 ± 5.9 ml kg⁻¹ min⁻¹). The maximal nature of the tests was checked by measurement of heart rate and lactate concentration, taken within 2 min post-test. The means of the MAS observed in the track test (15.8 ± 1.9 km h⁻¹) and in the treadmill test (15.9 ± 2.6 km h⁻¹) were not significantly different (P > 0.10). The mean of the shuttle-test MAS (13.1 ± 1 km h⁻¹) was significantly lower (P < 0.01) than those of the other tests. However, the MAS of the shuttle test and track test are linked. The equation for linear regression between MAS values in these two tests is MAS_track = 1.81 × MAS_shuttle − 7.86 (r = 0.91), allowing estimation of one of these MAS values when the other is known. Thus these values may be used within diversified training.
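The reported regression can be checked directly; the snippet below applies the abstract's coefficients to the mean shuttle-test MAS and recovers, to rounding, the observed track-test mean.

```python
# The regression reported in the abstract: MAS_track = 1.81 * MAS_shuttle - 7.86
# (r = 0.91). Coefficients and input values are taken from the abstract.

def mas_track_from_shuttle(mas_shuttle_kmh):
    """Predict track-test MAS (km/h) from shuttle-test MAS (km/h)."""
    return 1.81 * mas_shuttle_kmh - 7.86

predicted = mas_track_from_shuttle(13.1)  # mean shuttle-test MAS
# predicted ≈ 15.85 km/h, consistent with the observed track mean of 15.8
```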
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
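As a minimal sketch of the model class, the snippet below simulates a periodic AR(1) process, the simplest PARMA member, with a seasonally cycling coefficient. The period and coefficient values are invented for illustration; nothing here reproduces the paper's likelihood approximation or estimation algorithm.

```python
import random

# Hedged illustration: a periodic AR(1) process x_t = phi_{s(t)} x_{t-1} + e_t,
# where the AR coefficient phi cycles with the season s(t). Period and
# coefficients below are invented, not taken from the paper.

def simulate_par1(phi_by_season, n, seed=0):
    rng = random.Random(seed)
    period = len(phi_by_season)
    x, out = 0.0, []
    for t in range(n):
        phi = phi_by_season[t % period]      # seasonal AR coefficient
        x = phi * x + rng.gauss(0.0, 1.0)    # unit-variance innovations
        out.append(x)
    return out

# Four "seasons" per cycle, e.g. a quarterly series.
series = simulate_par1([0.9, 0.2, -0.5, 0.6], 200)
```

Because the coefficient depends on the season, the series is periodically stationary rather than stationary, which is exactly why ordinary ARMA filtering cannot standardize it.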
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.
2015-01-01
Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determines the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental “omics” data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that (i) indeed, most of the measured fluxes agree with a high adaptability of the network, (ii) this result can be used to further reduce the space of feasible solutions, and (iii) this reduced space improves the quantitative predictions
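The FBA step that this work builds on is a linear program; in standard constraint-based-modeling notation (not specific to this paper):

```latex
\max_{v} \; c^{\top} v
\quad \text{subject to} \quad
S v = 0, \qquad v^{\min} \le v \le v^{\max},
```

where $S$ is the stoichiometric matrix, $v$ the vector of reaction fluxes, and $c$ encodes the biological objective (e.g. biomass production). The "space of allowable reaction rates" in the abstract is the feasible polytope of this program.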
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm.
Wout Megchelenbrink
Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determines the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental "omics" data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more "flexible" metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that (i) indeed, most of the measured fluxes agree with a high adaptability of the network, (ii) this result can be used to further reduce the space of feasible solutions, and (iii) this reduced space improves the quantitative predictions made by FBA and
Estimates of chemical compaction and maximum burial depth from bedding parallel stylolites
Gasparrini, Marta; Beaudoin, Nicolas; Lacombe, Olivier; David, Marie-Eleonore; Youssef, Souhail; Koehn, Daniel
2017-04-01
Chemical compaction is a diagenetic process affecting sedimentary series during burial that develops rough dissolution surfaces named Bedding Parallel Stylolites (BPS). BPS are related to the dissolution of important rock volumes and can lead to porosity reduction around them due to post-dissolution cementation. Our understanding of the effect of chemical compaction on rock volume and porosity evolution during basin burial is, however, still too limited to be fully taken into account in basin models and thermal or fluid-flow simulations. This contribution presents a novel and multidisciplinary approach to quantify chemical compaction and to estimate the maximum paleodepth of burial, applied to the Dogger carbonate reservoirs of the Paris Basin sub-surface. This succession experienced a relatively simple burial history (nearly continuous burial from Upper Jurassic to Upper Cretaceous, followed by a main uplift phase), and mainly underwent normal overburden (inducing development of BPS), escaping major tectonic stress episodes. We considered one core from the depocentre and one from the eastern margin of the basin in the same stratigraphic interval (Bathonian Sup. - Callovian Inf.; restricted lagoonal setting), and analysed the macro- and micro-facies to distinguish five main depositional environments. The type and abundance of BPS were continuously recorded along the logs and treated statistically to obtain preliminary rules relating the occurrence of BPS to the contrasting facies and burial histories. The treatment of high-resolution 2D images allowed the identification and separation of the BPS to evaluate total stylolitization density and insoluble thickness as an indirect measure of the dissolved volume, with respect to the morphology of the BPS considered. Based on the morphology of the BPS roughness, we used a roughness signal analysis method to reconstruct the vertical paleo-stress (paleo-depth) recorded by the BPS during chemical compaction. The
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
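For orientation, the classical (unmodified) maximum likelihood estimators that such modifications start from can be sketched as follows. These are the textbook closed forms for the two-parameter power function density f(x) = a·x^(a−1)/b^a on (0, b], not the modified estimators studied in the paper.

```python
import math

# Hedged sketch: textbook MLEs for the two-parameter power function
# distribution f(x) = a * x**(a - 1) / b**a on (0, b].
# b is estimated by the sample maximum; a by n / sum(log(b_hat / x_i)).

def power_function_mle(xs):
    n = len(xs)
    b_hat = max(xs)                                   # MLE of the scale b
    a_hat = n / sum(math.log(b_hat / x) for x in xs)  # MLE of the shape a
    return a_hat, b_hat
```

A quick check: if u is uniform on (0, 1), then u**(1/a) follows the power function law with scale b = 1, so simulated samples recover the shape parameter.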
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Dual states estimation of a subsurface flow-transport coupled model using ensemble Kalman filtering
El Gharamti, Mohamad
2013-10-01
Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data are assimilated in the model. Assuming perfect flow, an ensemble Kalman filter (EnKF) can be used for direct data assimilation into the transport model. This is, however, a crude assumption as flow models can be subject to many sources of uncertainty. If the flow is not accurately simulated, contaminant predictions will likely be inaccurate even after successive Kalman updates of the contaminant model with the data. The problem is better handled when both flow and contaminant states are concurrently estimated using the traditional joint state augmentation approach. In this paper, we introduce a dual estimation strategy for data assimilation into a one-way coupled system by treating the flow and the contaminant models separately while intertwining a pair of distinct EnKFs, one for each model. The presented strategy only deals with the estimation of state variables but it can also be used for state and parameter estimation problems. This EnKF-based dual state-state estimation procedure presents a number of novel features: (i) it allows for simultaneous estimation of both flow and contaminant states in parallel; (ii) it provides a time consistent sequential updating scheme between the two models (first flow, then transport); (iii) it simplifies the implementation of the filtering system; and (iv) it yields more stable and accurate solutions than does the standard joint approach. We conducted synthetic numerical experiments based on various time stepping and observation strategies to evaluate the dual EnKF approach and compare its performance with the joint state augmentation approach. Experimental results show that on average, the dual strategy could reduce the estimation error of the coupled states by 15% compared with the
Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.
1986-05-01
The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965). References include: Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601; Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119.
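A hedged numerical sketch of the estimation problem this report concerns: the Cauchy location MLE has no closed form, so the log-likelihood must be maximized numerically. The search method below (golden-section over the sample range) is a generic choice, not taken from the report, and assumes the likelihood is unimodal over that range, which can fail for widely separated samples.

```python
import math

# Cauchy(location=theta, scale=1) log-likelihood, up to an additive constant:
# l(theta) = -sum(log(1 + (x - theta)**2)).

def cauchy_loglik(theta, xs):
    return -sum(math.log1p((x - theta) ** 2) for x in xs)

def cauchy_location_mle(xs, iters=60):
    # Golden-section search over [min(xs), max(xs)]; assumes a unimodal
    # likelihood on that interval (not guaranteed for Cauchy samples).
    g = (math.sqrt(5) - 1) / 2
    a, b = min(xs), max(xs)
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if cauchy_loglik(c, xs) > cauchy_loglik(d, xs):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2
```

For a sample symmetric about a point, the maximizer sits at the center of symmetry, which gives a quick sanity check.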
Monaghan, Alison A
2017-12-01
Over significant areas of the UK and western Europe, anthropogenic alteration of the subsurface by mining of coal has occurred beneath highly populated areas which are now considering a multiplicity of 'low carbon' unconventional energy resources including shale gas and oil, coal bed methane, geothermal energy and energy storage. To enable decision making on the 3D planning, licensing and extraction of these resources requires reduced uncertainty around complex geology and hydrogeological and geomechanical processes. An exemplar from the Carboniferous of central Scotland, UK, illustrates how, in areas lacking hydrocarbon well production data and 3D seismic surveys, legacy coal mine plans and associated boreholes provide valuable data that can be used to reduce the uncertainty around geometry and faulting of subsurface energy resources. However, legacy coal mines also limit unconventional resource volumes since mines and associated shafts alter the stress and hydrogeochemical state of the subsurface, commonly forming pathways to the surface. To reduce the risk of subsurface connections between energy resources, an example of an adapted methodology is described for shale gas/oil resource estimation to include a vertical separation or 'stand-off' zone between the deepest mine workings, to ensure the hydraulic fracturing required for shale resource production would not intersect legacy coal mines. Whilst the size of such separation zones requires further work, developing the concept of 3D spatial separation and planning is key to utilising the crowded subsurface energy system, whilst mitigating against resource sterilisation and environmental impacts, and could play a role in positively informing public and policy debate. Copyright © 2017 British Geological Survey, a component institute of NERC. Published by Elsevier B.V. All rights reserved.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... The performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error...
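A much-simplified toy version of the estimation idea (not the scheme analyzed in the paper): assume each of k independent reader sessions detects any given tag with a known probability p. The number of distinct tags D seen across all sessions is then Binomial(N, q) with q = 1 − (1 − p)^k, and the ML estimate of the true cardinality N is floor(D/q).

```python
# Toy ML cardinality estimate under independent sessions with known
# per-session detection probability p. This is an invented simplification
# for illustration, not the estimator derived in the paper.

def ml_cardinality(distinct_seen, p_detect, sessions):
    # Probability a tag is seen in at least one of the k sessions.
    q = 1.0 - (1.0 - p_detect) ** sessions
    # MLE of N for D ~ Binomial(N, q) is floor(D / q).
    return int(distinct_seen / q)

# e.g. 90 distinct tags over 3 sessions at 50% per-session detection:
estimate = ml_cardinality(90, 0.5, 3)  # q = 0.875 → 102
```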
A new model for estimating subsurface ice content based on combined electrical and seismic data sets
C. Hauck
2011-06-01
Detailed knowledge of the material properties and internal structures of frozen ground is one of the prerequisites in many permafrost studies. In the absence of direct evidence, such as in-situ borehole measurements, geophysical methods are an increasingly interesting option for obtaining subsurface information on various spatial and temporal scales. The indirect nature of geophysical soundings requires a relation between the measured variables (e.g. electrical resistivity, seismic velocity) and the actual subsurface constituents (rock, water, air, ice). In this work, we present a model which provides estimates of the volumetric fractions of these four constituents from tomographic electrical and seismic images. The model is tested using geophysical data sets from two rock glaciers in the Swiss Alps, where ground truth information in the form of borehole data is available. First results confirm the applicability of the so-called 4-phase model, which makes it possible to quantify the contributions of ice, water and air within permafrost areas as well as to detect solid bedrock. Apart from a similarly thick active layer with enhanced air content for both rock glaciers, the two case studies revealed a heterogeneous distribution of ice and unfrozen water within Muragl rock glacier, where bedrock was detected at depths of 20–25 m, but a comparatively homogeneous ice body with only minor heterogeneities within Murtèl rock glacier.
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite...
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKFOSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
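For context, the textbook stochastic EnKF update that both the joint and dual schemes build on can be sketched for a scalar state with an identity observation operator. This is the generic update step only, not the one-step-ahead smoothing variant the paper proposes; all numbers are illustrative.

```python
import random

# Hedged, scalar-state sketch of the stochastic EnKF analysis step:
# each member is nudged toward a perturbed copy of the observation,
# weighted by the Kalman gain computed from the ensemble variance.

def enkf_update(ensemble, y_obs, obs_var, seed=0):
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)  # Kalman gain for observation operator H = 1
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

prior = [1.8, 2.2, 2.0, 2.4, 1.6]        # forecast ensemble (e.g. head, m)
posterior = enkf_update(prior, 3.0, 0.25)  # assimilate observation y = 3.0
```

In the dual strategies above, an update of this form is applied first to the parameters and then to the state; the paper's contribution is reordering and augmenting these steps within a consistent Bayesian formulation.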
Ait-El-Fquih, Boujemaa
2016-08-12
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKFOSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.
Asymptotic properties of maximum likelihood estimators in models with multiple change points
He, Heping; 10.3150/09-BEJ232
2011-01-01
Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of those tools for analyzing the likelihood function in a multiple change-point model.
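In the simplest member of this model class, a single change in the mean of a Gaussian sequence with common known variance, maximizing the likelihood over the change point reduces to minimizing the total within-segment sum of squares. A brute-force sketch (illustrative only; the paper's general multiple-change-point treatment is far broader):

```python
# Hedged sketch: ML estimation of a single change point in the mean of a
# Gaussian sequence. The profile log-likelihood is maximized exactly where
# the combined within-segment sum of squared deviations is minimized.

def mle_change_point(xs):
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best_cost = None, float("inf")
    for k in range(1, len(xs)):  # change point after index k - 1
        cost = sse(xs[:k]) + sse(xs[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

data = [0.1, -0.2, 0.0, 0.2, 5.1, 4.9, 5.0, 5.2]
split = mle_change_point(data)  # → 4 (two segments of four points each)
```

Extending this exhaustive search to multiple change points is exactly where the likelihood analysis becomes delicate, which motivates the asymptotic tools the abstract describes.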
Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The Joint-EnKF directly updates the augmented state-parameter vector while the Dual-EnKF employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. In this paper, we reverse the order of the forecast-update steps following the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem, based on which we propose a new dual EnKF scheme, the Dual-EnKF$_{\\rm OSA}$. Compared to the Dual-EnKF, this introduces a new update step to the state in a fully consistent Bayesian framework...
Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K
2009-01-01
... is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data. ... The saddle-point approximation is an adequate replacement in most practical situations. The performance of normexp for assessing differential expression is improved by adding a small offset to the corrected intensities.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
In this paper, we present a methodology for estimating the parameters of a model for an electric arc furnace by using maximum likelihood estimation, one of the most widely used methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving the set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken at the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the current's root mean square value.
L. M. Miller
2010-09-01
The availability of wind power for renewable energy extraction is ultimately limited by how much kinetic energy is generated by natural processes within the Earth system and by fundamental limits on how much of the wind power can be extracted. Here we use these considerations to provide a maximum estimate of wind power availability over land. We use three different methods. First, we use simple, established estimates of the energetics of the atmospheric circulation, which yield about 38 TW of wind power available for extraction. Second, we set up a simple momentum balance model to estimate maximum extractability, which we then apply to reanalysis climate data, yielding an estimate of 17 TW. Finally, we perform climate model simulations in which we extract different amounts of momentum from the atmospheric boundary layer to obtain a maximum estimate of how much power can be extracted, yielding 36 TW. These three methods consistently yield maximum estimates in the range of 17–38 TW, notably less than recent estimates that claim abundant wind power availability. Furthermore, we show with the climate model simulations that the climatic effects at maximum wind power extraction are similar in magnitude to those associated with a doubling of atmospheric CO2. We conclude that in order to understand fundamental limits to renewable energy resources, as well as the impacts of their utilization, it is imperative to use a thermodynamic, Earth system perspective, rather than engineering specifications of the latest technology.
GA-BASED MAXIMUM POWER DISSIPATION ESTIMATION OF VLSI SEQUENTIAL CIRCUITS OF ARBITRARY DELAY MODELS
Lu Junming; Lin Zhenghui
2002-01-01
In this paper, the glitching activity and process variations in the maximum power dissipation estimation of CMOS circuits are introduced. Given a circuit and the gate library, a new Genetic Algorithm (GA)-based technique is developed to determine the maximum power dissipation from a statistical point of view. The simulation on ISCAS-89 benchmarks shows that the ratio of the maximum power dissipation with glitching activity over the maximum power under the zero-delay model ranges from 1.18 to 4.02. Compared with the traditional Monte Carlo-based technique, the new approach presented in this paper is more effective.
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions
Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper
2016-01-01
We propose a novel Power Spectral Density (PSD) estimator for multi-microphone systems operating in reverberant and noisy conditions. The estimator is derived using the maximum likelihood approach and is based on a blocked and pre-whitened additive signal model. The intended application […] the difference between algorithms was found to be statistically significant only in some of the experimental conditions.
On the Loss of Information in Conditional Maximum Likelihood Estimation of Item Parameters.
Eggen, Theo J. H. M.
2000-01-01
Shows that the concept of F-information, a generalization of Fisher information, is a useful tool for evaluating the loss of information in conditional maximum likelihood (CML) estimation. With the F-information concept it is possible to investigate the conditions under which there is no loss of information in CML estimation and to quantify a loss…
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Enders, Craig K.
2001-01-01
Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…
Chave, Alan D.
2017-08-01
The robust statistical model of a Gaussian core contaminated by outlying data in use since the 1980s, and which underlies modern estimation of the magnetotelluric (MT) response function, is re-examined from first principles. The residuals from robust estimators applied to MT data are shown to be systematically long-tailed compared to a distribution based on the Gaussian and hence inconsistent with the robust model. Instead, MT data are pervasively described by the stable distribution family for which the Gaussian is an end member, but whose remaining distributions have algebraic rather than exponential tails. The validity of the stable model is rigorously demonstrated using a permutation test. A maximum likelihood estimator (MLE), including the use of a remote reference, that exploits the stable nature of MT data is formulated, and its two-stage implementation, in which stable parameters are first fit to the residuals, and then the MT responses are solved for, with iteration between them, is described. The MLE is inherently robust, but differs from a conventional robust estimator because it is based on a statistical model derived from the data rather than being ad hoc. Finally, the covariance matrices obtained from MT data are pervasively improper as a result of weak non-stationarity, and the Cramér-Rao lower bound for the improper covariance matrix is derived, resulting in reliable second-order statistics for MT responses. The stable MLE was applied to an exemplar broadband data set from northwest Namibia. The stable MLE is shown to be consistent with the statistical model underlying linear regression and hence is unconditionally unbiased, in contrast to the robust model. The MLE is compared to conventional robust remote reference and two-stage estimators, establishing that the standard errors of the former are systematically smaller than for either of the latter, and that the standardized differences between them exhibit excursions that are both too frequent and
De Vos, Paul; Wu, Qiang
2015-01-01
We employ a parameter-free distribution estimation framework where estimators are random distributions and utilize the Kullback–Leibler (KL) divergence as a loss function. Wu and Vos [ J. Statist. Plann. Inference 142 (2012) 1525–1536] show that when an estimator obtained from an i.i.d. sample is viewed as a random distribution, the KL risk of the estimator decomposes in a fashion parallel to the mean squared error decomposition when the estimator is a real-valued random variable. In th...
Second order pseudo-maximum likelihood estimation and conditional variance misspecification
Lejeune, Bernard
1997-01-01
In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not...
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry
Christina G. de Souza e Silva; Araújo, Claudio Gil S.
2015-01-01
Background: Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. Objective: To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-spec...
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We, therefore, derive the MEEM algorithm by optimizing a lower-bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm for image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks. We further propose to use our approach in density estimation, image recovery and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
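The classical EM density estimator that MEEM builds on can be illustrated with a minimal sketch. The following is not the paper's MEEM algorithm (which adds the entropy constraint and lower-bound optimization); it is the baseline EM fit of a two-component 1-D Gaussian mixture, with function and variable names of our own choosing:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Classical EM for a two-component 1-D Gaussian mixture density.

    Illustrative sketch of the baseline EM density estimator that MEEM
    extends; initialisation and iteration count are arbitrary choices.
    """
    mu = np.array([x.min(), x.max()])        # spread-out initial means
    var = np.array([x.var(), x.var()])       # initial variances
    w = np.array([0.5, 0.5])                 # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

# Synthetic data: two well-separated Gaussian clusters
x = np.concatenate([np.random.default_rng(1).normal(-3, 1, 500),
                    np.random.default_rng(2).normal(4, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

The maximum-entropy constraint in MEEM would additionally smooth the estimated density, which this plain EM does not do.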
An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging
Pistorius Stephen
2010-01-01
During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement on a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium based on the use of an information theory metric is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMR and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time on the order of seconds.
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for SIMO Systems
Sheng Chen; Xiao-Chen Yang; Lei Chen; Lajos Hanzo
2007-01-01
A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
Gupta, N. K.; Mehra, R. K.
1974-01-01
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladiaslav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise
Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt
2016-01-01
In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML PSD estimation scheme that is suitable for sound scenes which besides speech and reverberation consist of an additional noise component whose second-order statistics are known. The proposed algorithm is shown to outperform an existing similar algorithm in terms of PSD estimation accuracy. Moreover…
Trujillo, B. M.
1986-01-01
This paper presents the technique and results of maximum likelihood estimation used to determine lift and drag characteristics of the Space Shuttle Orbiter. Maximum likelihood estimation uses measurable parameters to estimate nonmeasurable parameters. The nonmeasurable parameters for this case are elements of a nonlinear, dynamic model of the orbiter. The estimated parameters are used to evaluate a cost function that computes the differences between the measured and estimated longitudinal parameters. The case presented is a dynamic analysis. This places less restriction on pitching motion and can provide additional information about the orbiter such as lift and drag characteristics at conditions other than trim, instrument biases, and pitching moment characteristics. In addition, an output of the analysis is an estimate of the values for the individual components of lift and drag that contribute to the total lift and drag. The results show that maximum likelihood estimation is a useful tool for analysis of Space Shuttle Orbiter performance and is also applicable to parameter analysis of other types of aircraft.
L. M. Miller
2011-02-01
The availability of wind power for renewable energy extraction is ultimately limited by how much kinetic energy is generated by natural processes within the Earth system and by fundamental limits of how much of the wind power can be extracted. Here we use these considerations to provide a maximum estimate of wind power availability over land. We use several different methods. First, we outline the processes associated with wind power generation and extraction with a simple power transfer hierarchy based on the assumption that available wind power will not geographically vary with increased extraction for an estimate of 68 TW. Second, we set up a simple momentum balance model to estimate maximum extractability which we then apply to reanalysis climate data, yielding an estimate of 21 TW. Third, we perform general circulation model simulations in which we extract different amounts of momentum from the atmospheric boundary layer to obtain a maximum estimate of how much power can be extracted, yielding 18–34 TW. These three methods consistently yield maximum estimates in the range of 18–68 TW and are notably less than recent estimates that claim abundant wind power availability. Furthermore, we show with the general circulation model simulations that some climatic effects at maximum wind power extraction are similar in magnitude to those associated with a doubling of atmospheric CO_{2}. We conclude that in order to understand fundamental limits to renewable energy resources, as well as the impacts of their utilization, it is imperative to use a "top-down" thermodynamic Earth system perspective, rather than the more common "bottom-up" engineering approach.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
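The abstract's core computational idea, solving the likelihood score equations by repeated Newton-Raphson iterations, can be sketched generically. The NDMMF's own equations are not reproduced in the abstract, so the example below applies the same Newton-Raphson MLE machinery to a standard logistic regression instead; all names are illustrative:

```python
import numpy as np

def newton_mle_logistic(X, y, n_iter=25):
    """Maximum-likelihood fit of a logistic regression by Newton-Raphson.

    Generic illustration of solving score equations by repeated
    Newton-Raphson iterations, as described for the NDMMF; this is a
    stand-in model, not the NDMMF itself.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
        grad = X.T @ (y - p)                     # score: gradient of log-likelihood
        W = p * (1.0 - p)
        hess = -(X * W[:, None]).T @ X           # Hessian of log-likelihood
        beta = beta - np.linalg.solve(hess, grad)  # Newton step
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
X = np.column_stack([np.ones_like(x), x])
true_beta = np.array([0.5, -1.2])
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = newton_mle_logistic(X, y)
```

Each iteration solves one linear system, which is what makes the approach fast and memory-light compared to general-purpose optimizers.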
A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2015-01-01
Periodic signals are encountered in many applications. Such signals can be modelled by a weighted sum of sinusoidal components whose frequencies are integer multiples of a fundamental frequency. Given a data set, the fundamental frequency can be estimated in many ways, including a maximum likelihood (ML) approach. Unfortunately, the ML estimator has a very high computational complexity, and the more inaccurate, but faster, correlation-based estimators are therefore often used instead. In this paper, we propose a fast algorithm for the evaluation of the ML cost function for complex-valued data over all frequencies on a Fourier grid and up to a maximum model order. The proposed algorithm significantly reduces the computational complexity to a level not far from the complexity of the popular harmonic summation method, which is an approximate ML estimator.
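The harmonic summation baseline mentioned at the end of the abstract is easy to sketch: sum the periodogram at integer multiples of each candidate fundamental and pick the maximiser. This is the approximate ML estimator the paper compares against, not the paper's fast exact-ML algorithm; grid, FFT size, and harmonic count below are illustrative choices:

```python
import numpy as np

def harmonic_summation_f0(x, fs, f0_grid, n_harm=3):
    """Approximate-ML fundamental frequency estimate by harmonic summation.

    Sums the periodogram at the first n_harm multiples of each candidate
    f0 (in Hz) and returns the candidate with the largest sum.
    """
    n_fft = 1 << 16
    spec = np.abs(np.fft.rfft(x, n_fft)) ** 2        # periodogram on a fine grid
    df = fs / n_fft                                  # bin spacing in Hz

    def cost(f0):
        bins = np.round(np.arange(1, n_harm + 1) * f0 / df).astype(int)
        return spec[bins].sum()

    return max(f0_grid, key=cost)

fs = 8000.0
t = np.arange(4000) / fs
# Periodic test signal: fundamental at 210 Hz plus two harmonics
x = (np.sin(2 * np.pi * 210 * t) + 0.5 * np.sin(2 * np.pi * 420 * t)
     + 0.3 * np.sin(2 * np.pi * 630 * t))
f0_hat = harmonic_summation_f0(x, fs, np.arange(60.0, 400.0, 0.5))
```

Summing several harmonics is what resolves the octave ambiguity: a candidate at half the true fundamental collects only one of the three spectral peaks.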
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
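The positivity problem the abstract describes can be made concrete with a small numpy sketch. The code below is not the paper's maximum-likelihood protocol; it shows the simpler eigenvalue-clipping repair of a raw reconstruction, which motivates why a method guaranteeing positive definiteness from the start is attractive:

```python
import numpy as np

def clip_to_density_matrix(rho_raw):
    """Force a raw (possibly non-positive) reconstruction to a valid state.

    Illustrative only: symmetrise, clip negative eigenvalues to zero, and
    renormalise the trace. This is a naive projection, not the
    maximum-likelihood estimation used in the paper.
    """
    rho = (rho_raw + rho_raw.conj().T) / 2           # enforce Hermiticity
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)                  # remove negative eigenvalues
    rho = (vecs * vals) @ vecs.conj().T
    return rho / np.trace(rho).real                  # unit trace

# A hypothetical "measured" single-qubit matrix with a negative eigenvalue
rho_raw = np.array([[1.05, 0.30], [0.30, -0.05]], dtype=complex)
rho = clip_to_density_matrix(rho_raw)
```

The MLE approach referenced in the abstract instead parameterises the state so that every iterate is positive definite by construction, avoiding this after-the-fact repair.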
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
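The REML-MVN idea can be sketched in a few lines: draw the unique elements of G from a multivariate normal centred on the REML estimate, with covariance taken from the inverse information matrix, then evaluate any function of G on the draws. In practice both inputs come from the mixed-model fit (e.g. WOMBAT); the values below are toy numbers for illustration:

```python
import numpy as np

def reml_mvn_samples(g_hat, cov_vech, n_samples=1000, seed=0):
    """Sample G matrices from the large-sample MVN of REML estimates.

    g_hat    : estimated k x k genetic covariance matrix.
    cov_vech : sampling covariance of its upper-triangle elements,
               normally obtained from the inverse information matrix.
    Returns an (n_samples, k, k) array of symmetric draws.
    """
    rng = np.random.default_rng(seed)
    k = g_hat.shape[0]
    iu = np.triu_indices(k)
    draws = rng.multivariate_normal(g_hat[iu], cov_vech, size=n_samples)
    gs = np.empty((n_samples, k, k))
    for g, d in zip(gs, draws):
        g[iu] = d
        g.T[iu] = d                               # mirror into the lower triangle
    return gs

g_hat = np.array([[2.0, 0.5], [0.5, 1.0]])        # toy 2-trait G estimate
cov_vech = 0.01 * np.eye(3)                       # toy uncertainty of (g11, g12, g22)
gs = reml_mvn_samples(g_hat, cov_vech)
traces = gs.trace(axis1=1, axis2=2)               # e.g. total genetic variance per draw
```

The spread of `traces` (or of any evolvability statistic computed per draw) is the REML-MVN estimate of that statistic's sampling variability.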
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Moore, S K; Hunter, W C J; Furenlid, L.R.; Barrett, H. H.
2007-01-01
We present a simple 3D event position-estimation method using raw list-mode acquisition and maximum-likelihood estimation in a modular gamma camera with a thick (25mm) monolithic scintillation crystal. This method involves measuring 2D calibration scans with a well-collimated 511 keV source and fitting each point to a simple depth-dependent light distribution model. Preliminary results show that angled collimated beams appear properly reconstructed.
Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation
Alejandro C. Frery
2004-12-01
This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢⁰ law. This paper deals with amplitude data, so the 𝒢A⁰ distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢A⁰ distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
Performance of penalized maximum likelihood in estimation of genetic covariances matrices
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable on small samples, but inaccurately on very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and that with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM can estimate the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy on the confidence interval of the quantile function, and that based on LSMEQFMCC is the most stable and accurate method on very small samples (10 samples).
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs the subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce the high-frequency noise contamination, and finally applies the maximum likelihood estimation technique for estimating the interval of R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique for realization of the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
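The correlation step of such an estimator can be sketched compactly: apply the absolute-value nonlinearity, compute the autocorrelation via the FFT, and pick the dominant lag within a plausible heart-rate range. This is a minimal stand-in for the pipeline described in the abstract; the subspace baseline filter and the quality indicator are omitted, and the 40–200 bpm search range is our assumption:

```python
import numpy as np

def heart_rate_bpm(ecg, fs):
    """Estimate heart rate from the dominant R-R period via FFT autocorrelation.

    Sketch of the correlation technique from the abstract, realised with
    the FFT; assumes heart rates between 40 and 200 bpm.
    """
    x = np.abs(ecg - ecg.mean())                 # absolute-value nonlinearity
    n = 1 << int(np.ceil(np.log2(2 * len(x))))   # zero-pad for linear correlation
    acf = np.fft.irfft(np.abs(np.fft.rfft(x, n)) ** 2)[: len(x)]
    lo = int(fs * 60 / 200)                      # shortest plausible R-R lag
    hi = int(fs * 60 / 40)                       # longest plausible R-R lag
    lag = lo + int(np.argmax(acf[lo:hi]))        # dominant R-R interval in samples
    return 60.0 * fs / lag

fs = 250.0
ecg = np.zeros(int(10 * fs))
ecg[::200] = 1.0                                 # synthetic R peaks every 0.8 s (75 bpm)
bpm = heart_rate_bpm(ecg, fs)
```

Restricting the search to physiological lags is what keeps the zero-lag peak and long-period baseline drift from dominating the estimate.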
A technique for estimating maximum harvesting effort in a stochastic fishery model
Ram Rup Sarkar; J Chattopadhayay
2003-06-01
Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wild life management. Estimation of maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in fluctuating environment has been developed in a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find out the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.
On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.
Fischer, Gerhard H.
1981-01-01
Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)
Uniform estimate for maximum of randomly weighted sums with applications to insurance risk theory
WANG Dingcheng; SU Chun; ZENG Yong
2005-01-01
This paper obtains the uniform estimate for the maximum of sums of independent and heavy-tailed random variables with nonnegative random weights, which can be arbitrarily dependent on each other. The applications to ruin probabilities in a discrete time risk model with dependent stochastic returns are then considered.
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and asymptotic normality of β̂n.
Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk;
2014-01-01
We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) ...
Potential-scour assessments and estimates of maximum scour at selected bridges in Iowa
Fischer, E.E.
1995-01-01
The results of potential-scour assessments at 130 bridges and estimates of maximum scour at 10 bridges in Iowa are presented. All of the bridges evaluated in the study are constructed bridges (not culverts) that are sites of active or discontinued streamflow-gaging stations and peak-stage measurement sites. The period of the study was from October 1991 to September 1994.
A note on the maximum likelihood estimator in the gamma regression model
Jerzy P. Rydlewski
2009-01-01
This paper considers a nonlinear regression model, in which the dependent variable has the gamma distribution. A model is considered in which the shape parameter of the random variable is the sum of continuous and algebraically independent functions. The paper proves that there is exactly one maximum likelihood estimator for the gamma regression model.
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction
Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.
2009-01-01
There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Izsak, F.
2006-01-01
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. For this, the coefficients are ...
Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe
1989-01-01
PROFIT-PC is a menu driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2 inches diameter at breast-height class, delivered product prices by species and product...
Joint maximum likelihood estimation of carrier and sampling frequency offsets for OFDM systems
Kim, Y H
2010-01-01
In orthogonal-frequency division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, the results of Nguyen-Le's ML estimation show poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme.
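Moose's CFO estimator, which the paper extends, can be sketched as the phase of the correlation between two repeated training symbols; the waveform, sample rate, and offset below are made-up test values:

```python
import cmath
import math

def moose_cfo_estimate(rx1, rx2, symbol_spacing, sample_period):
    """Moose-style ML estimate of the carrier frequency offset (CFO) from
    two identical training symbols received `symbol_spacing` samples apart:
    the CFO is the phase of their cross-correlation divided by the elapsed
    time.  A textbook sketch of the classical CFO step, not the paper's
    joint CFO/SFO estimator."""
    corr = sum(b * a.conjugate() for a, b in zip(rx1, rx2))
    return cmath.phase(corr) / (2.0 * math.pi * symbol_spacing * sample_period)

# usage: synthesize two repeated symbols with a known CFO and recover it
fs, N, cfo_true = 1.0e6, 64, 1234.0          # sample rate, symbol length, Hz
sym = [cmath.exp(2j * math.pi * ((7 * n * n + 3 * n) % N) / N) for n in range(N)]
tx = sym + sym                               # the training symbol sent twice
rx = [s * cmath.exp(2j * math.pi * cfo_true * n / fs) for n, s in enumerate(tx)]
cfo_hat = moose_cfo_estimate(rx[:N], rx[N:], N, 1.0 / fs)  # ~1234 Hz
```

Note the usual ambiguity limit: the phase wraps, so offsets are only identifiable up to |CFO| < fs / (2 * symbol_spacing).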
López-Valcarce Roberto
2004-01-01
We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and is therefore well suited to DSP implementation, performing well with preliminary field data.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Shu Cai
2016-12-01
Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it, under the framework of alternating projection, to multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they offer a higher spatial resolution than existing methods based on the ML criterion.
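For a single source, the ML criterion reduces to maximizing the beamformer power over a grid of candidate angles; this plain grid search is the baseline that the paper's SOS/SDP relaxation replaces (the array size and angles below are illustrative):

```python
import cmath
import math

def ml_doa_single_source(snapshots, n_sensors, grid_deg):
    """Grid-search deterministic ML DOA estimate for one source on a
    half-wavelength ULA: maximize a(th)^H R a(th), which coincides with the
    ML criterion in the single-source, white-noise case.  Illustrative
    sketch only."""
    K = len(snapshots)
    # sample covariance R (n_sensors x n_sensors)
    R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / K
          for j in range(n_sensors)] for i in range(n_sensors)]
    best_deg, best_power = None, -1.0
    for deg in grid_deg:
        ph = math.pi * math.sin(math.radians(deg))
        a = [cmath.exp(1j * ph * m) for m in range(n_sensors)]
        power = sum((a[i].conjugate() * R[i][j] * a[j]).real
                    for i in range(n_sensors) for j in range(n_sensors))
        if power > best_power:
            best_deg, best_power = deg, power
    return best_deg

# usage: noiseless source at 20 degrees, M = 8 sensors, 5 snapshots
M, true_deg = 8, 20.0
phi = math.pi * math.sin(math.radians(true_deg))
snaps = [[cmath.exp(1j * (phi * m + 0.3 * k)) for m in range(M)] for k in range(5)]
grid = [d / 2 for d in range(-180, 181)]   # -90 to 90 degrees in 0.5 deg steps
doa_hat = ml_doa_single_source(snaps, M, grid)
```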
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), with an amplitude of 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), with an amplitude of about 137 or 80, depending on whether the cycle is a fast or a slow riser.
Agulhas leakage into the Atlantic estimated with subsurface floats and surface drifters
Richardson, Philip L.
2007-08-01
Surface drifters and subsurface floats drifting at depths near 800 m were used to study the pathways of warm, salty Indian Ocean water leaking into the South Atlantic that is a component of the upper limb of the Atlantic meridional overturning circulation (MOC). Four drifters and 5 floats drifted from the Agulhas Current directly into the Benguela Current. Others looped for various amounts of time in Agulhas rings and cyclones, which translated westward into the Atlantic, contributing a large part of Indian Ocean leakage. Agulhas rings translated into the Benguela Current, where they slowly decayed. Some large, blob-like Agulhas rings with irregular shapes were found in the southeastern Cape Basin. Drifter trajectories suggest these rings become more circular with time, eventually evolving into the circular rings observed west of the Walvis Ridge. Agulhas cyclones, which form on the north side of the Agulhas Current south of Africa, translated southwestward (to 6°E) and contributed water to the southern Cape Basin. A new discovery is a westward extension from the mean Agulhas retroflection measured by westward drifting floats near 41°S out to at least 5°W, with some floats as far west as 25°W. The Agulhas extension appears to split the South Atlantic Current (SAC) into two branches and to transport Agulhas water westward, where it is mixed and blended with eastward-flowing water from the western Atlantic. The blended mixture flows northeastward in the northern branch of the SAC and into the Benguela Current. Agulhas leakage transport was estimated from drifters and floats to be at least 15 Sv in the upper 1000 m, which is equivalent to the transport of the upper layer MOC. It is suggested that the major component of the upper layer overturning circulation in the Atlantic is Agulhas leakage in the form of Agulhas rings.
One-repetition maximum bench press performance estimated with a new accelerometer method.
Rontu, Jari-Pekka; Hannula, Manne I; Leskinen, Sami; Linnamo, Vesa; Salmi, Jukka A
2010-08-01
The one-repetition maximum (1RM) is an important measure of muscular strength. The purpose of this study was to evaluate a new method to predict 1RM bench press performance from a submaximal lift. The developed method was evaluated at different load levels (50, 60, 70, 80, and 90% of 1RM). The subjects were active floorball players (n = 22). The new method is based on the assumption that 1RM can be calculated from the submaximal weight and the maximum acceleration of the submaximal weight during the lift. The submaximal bench press lift was recorded with a 3-axis accelerometer integrated into wrist-worn equipment and a data acquisition card. The maximum acceleration was calculated from the sensor's measurement data and analyzed on a personal computer with LabView-based software. The estimated 1RM results were compared with traditionally measured 1RM results for the subjects. A separate estimation equation was developed for each load level, that is, 5 different estimation equations were used, based on the measured 1RM values of the subjects. The mean (+/-SD) measured 1RM was 69.86 (+/-15.72) kg. The means of the estimated 1RM values were 69.85-69.97 kg. The correlations between measured and estimated 1RM results were high (0.89-0.97; p < 0.001). The differences between the methods were very small (-0.11 to 0.01 kg) and not statistically significant. The results of this study showed promising prediction accuracy for estimating bench press performance from just a single submaximal bench press lift. The estimation accuracy is competitive with other known estimation methods, at least for the current study population.
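The core assumption above, that 1RM follows from the submaximal weight and its peak acceleration, can be sketched as a one-line predictor; the linear form and the coefficients k0 and k1 are hypothetical placeholders, not the study's fitted equations:

```python
def estimate_1rm(weight_kg, max_accel_ms2, k0=0.80, k1=0.060):
    """Sketch of the paper's idea: predict the one-repetition maximum from
    a single submaximal lift using the lifted weight and the peak
    acceleration reached during the lift.  The linear form and k0, k1 are
    hypothetical placeholders; the study fitted a separate equation per
    load level (50-90% of 1RM)."""
    return weight_kg * (k0 + k1 * max_accel_ms2)

# usage: a 50 kg lift moved with a peak acceleration of 5 m/s^2
one_rm = estimate_1rm(50.0, 5.0)   # 50 * (0.80 + 0.060 * 5) = 55 kg
```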
A maximum likelihood estimation framework for delay logistic differential equation model
Mahmoud, Ahmed Adly; Dass, Sarat Chandra; Muthuvalu, Mohana S.
2016-11-01
This paper introduces the maximum likelihood method of estimation for a delay differential equation model governed by an unknown delay and other parameters of interest, followed by a numerical solver approach. As an example we consider the delayed logistic differential equation. A grid-based estimation framework is proposed. Our methodology correctly estimates the delay parameter as well as the initial starting value of the dynamical system based on simulation data. The computations were carried out with the help of the mathematical software MATLAB® 8.0 (R2012b).
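A minimal sketch of the grid-based framework: solve the delayed logistic equation numerically for each candidate delay, then pick the delay minimizing the squared residuals (equivalent to ML under i.i.d. Gaussian noise); all parameter values below are illustrative:

```python
def solve_delayed_logistic(r, K, tau, x0, t_end, dt=0.01):
    """Euler integration of x'(t) = r x(t) (1 - x(t - tau) / K) with
    constant history x(t) = x0 for t <= 0.  A minimal stand-in for the
    numerical-solver step of the paper's framework."""
    n, lag = int(round(t_end / dt)), int(round(tau / dt))
    xs = [x0]
    for i in range(n):
        x_lag = xs[i - lag] if i >= lag else x0
        xs.append(xs[i] + dt * r * xs[i] * (1 - x_lag / K))
    return xs

def grid_mle_delay(data, r, K, x0, t_end, tau_grid, dt=0.01):
    """Under i.i.d. Gaussian observation noise, maximizing the likelihood
    over the delay reduces to minimizing the sum of squared residuals on a
    grid of candidate delays."""
    def sse(tau):
        xs = solve_delayed_logistic(r, K, tau, x0, t_end, dt)
        return sum((a - b) ** 2 for a, b in zip(xs, data))
    return min(tau_grid, key=sse)

# usage: simulate with tau = 0.5 and recover it from noiseless "data"
truth = solve_delayed_logistic(r=1.2, K=10.0, tau=0.5, x0=1.0, t_end=5.0)
tau_hat = grid_mle_delay(truth, 1.2, 10.0, 1.0, 5.0, [0.1, 0.3, 0.5, 0.7, 0.9])
```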
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates, since maximum likelihood (ML) covariance estimates may lead to unreliable results. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high resolution vertebra and cartilage models are reconstructed from incomplete and lower dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM-aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin ...
Estimating minimum and maximum air temperature using MODIS data over Indo-Gangetic Plain
D B Shah; M R Pandya; H J Trivedi; A R Jani
2013-12-01
Spatially distributed air temperature data are required for climatological, hydrological and environmental studies. However, high spatial distribution patterns of air temperature are not available from meteorological stations due to their sparse network. The objective of this study was to estimate high spatial resolution minimum air temperature (Tmin) and maximum air temperature (Tmax) over the Indo-Gangetic Plain using Moderate Resolution Imaging Spectroradiometer (MODIS) data and India Meteorological Department (IMD) ground station data. Tmin was estimated by establishing an empirical relationship between IMD Tmin and night-time MODIS Land Surface Temperature (Ts), while Tmax was estimated using the Temperature-Vegetation Index (TVX) approach. The TVX approach is based on the linear relationship between Ts and Normalized Difference Vegetation Index (NDVI) data, where Tmax is estimated by extrapolating the NDVI-Ts regression line to the maximum NDVI value (NDVImax) for effective full vegetation cover. The present study also proposes a methodology to estimate NDVImax using IMD-measured Tmax for the Indo-Gangetic Plain. Comparison of MODIS-estimated Tmin with IMD-measured Tmin showed a mean absolute error (MAE) of 1.73°C and a root mean square error (RMSE) of 2.2°C. Analysis for Tmax estimation showed that the calibrated NDVImax performed well, with an MAE of 1.79°C and an RMSE of 2.16°C.
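The TVX extrapolation step can be sketched as an ordinary least-squares fit of Ts against NDVI, evaluated at NDVImax; the NDVI and Ts values below are made up for illustration:

```python
def tvx_tmax(ndvi, ts, ndvi_max):
    """Sketch of the TVX approach: fit the linear Ts-vs-NDVI relation for a
    pixel window by least squares and extrapolate it to NDVImax (effective
    full vegetation cover) to estimate maximum air temperature.  Inputs are
    illustrative; the paper calibrates NDVImax against IMD station Tmax."""
    n = len(ndvi)
    mx, my = sum(ndvi) / n, sum(ts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(ndvi, ts))
             / sum((x - mx) ** 2 for x in ndvi))
    intercept = my - slope * mx
    return intercept + slope * ndvi_max

# usage: Ts falls as NDVI rises; extrapolate to NDVImax = 0.9
ndvi = [0.2, 0.3, 0.4, 0.5, 0.6]
ts = [42.0, 40.5, 39.0, 37.5, 36.0]   # deg C, linear with slope -15
tmax_hat = tvx_tmax(ndvi, ts, 0.9)    # 45 - 15 * 0.9 = 31.5 deg C
```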
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
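The TMDI computation itself is simple arithmetic: multiply each food category's daily consumption by the maximum authorized additive level and divide by body weight. The categories and levels below are invented for illustration, not the Brazilian survey data:

```python
def theoretical_max_daily_intake(consumption_g_per_day, max_level_mg_per_kg_food,
                                 body_weight_kg=60.0):
    """Theoretical maximum daily intake (TMDI) in mg per kg body weight per
    day, assuming every food in each category contains the additive at its
    maximum authorized level.  Figures passed in are made-up illustrations."""
    intake_mg = sum(c / 1000.0 * ml for c, ml in
                    zip(consumption_g_per_day, max_level_mg_per_kg_food))
    return intake_mg / body_weight_kg

# usage: two hypothetical food categories eaten at 30 g/day and 100 g/day,
# with maximum authorized levels of 100 and 50 mg per kg of food
tmdi = theoretical_max_daily_intake([30.0, 100.0], [100.0, 50.0])
# (30/1000*100 + 100/1000*50) / 60 = (3 + 5) / 60 ~ 0.133 mg/kg bw/day
```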
Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation
Koppanyi, Z.; Toth, C. K.
2014-11-01
This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that UWB two-way time-of-flight ranges under indoor circumstances follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, when adjusting the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each pair of nodes from samples acquired in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a multipath-dense environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function
LIU, Liang; WEI, Ping; LIAO, Hong Shu
Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the condition of a large array, we search for an approximately convex range around the true DOAs in which the DML function is guaranteed to be convex. Based on this convexity, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis...
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.
Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya
2015-04-01
The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into the subsurface water storage and therefore provides a robust and quick approach to quantifying the subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Therefore, tritium is ideally suited for characterization of the catchment's responses and can provide information on mean water transit times up to 200 years. Only in recent years has it become possible to use tritium for dating of stream and river water, due to the fading impact of the bomb-tritium from thermo-nuclear weapons testing, and due to improved measurement accuracy for the extremely low natural tritium concentrations. Transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number of tritium studies have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing for comparison between these regions. In 2014, three field trips were conducted in Hokkaido in June, July and October to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 m and 860 m MSL and drainage areas between 45 and 377 km2. Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snow melt and baseflow contributions
Robust maximum likelihood estimation for stochastic state space model with observation outliers
AlMutawa, J.
2016-08-01
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it is still sensitive to IO and to a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
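The trimming idea behind the TMLE can be sketched for the simplest case, a Gaussian location parameter: keep the observations with the smallest squared residuals and refit, iterating to a fixed point. This is the generic trimmed-likelihood recipe, not the paper's state-space algorithm:

```python
def trimmed_mle_location(data, trim_fraction=0.2):
    """Trimmed ML estimate of a Gaussian location parameter: keep the h
    observations with the smallest squared residuals about the current
    estimate and refit, iterating to a fixed point.  A generic sketch of
    the trimming idea, with an illustrative trim fraction."""
    h = max(1, int(round(len(data) * (1 - trim_fraction))))
    mu = sorted(data)[len(data) // 2]          # robust start: median
    for _ in range(50):
        kept = sorted(data, key=lambda z: (z - mu) ** 2)[:h]
        mu_new = sum(kept) / h
        if abs(mu_new - mu) < 1e-12:
            break
        mu = mu_new
    return mu

# usage: the gross outlier 100.0 is excluded from the fit
mu_hat = trimmed_mle_location([0.1, -0.2, 0.05, 0.0, 100.0])
```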
Lee, Keunjong; Matsuno, Takeshi; Endoh, Takahiro; Ishizaka, Joji; Zhu, Yuanli; Takeda, Shigenobu; Sukigara, Chiho
2017-07-01
In summer, Changjiang Diluted Water (CDW) expands over the shelf region of the northern East China Sea. Dilution of the low salinity water could be caused by vertical mixing through the halocline. Vertical mixing through the pycnocline can transport not only saline water, but also high nutrient water from deeper layers to the surface euphotic zone. It is therefore very important to quantitatively evaluate the vertical mixing to understand the process of primary production in the CDW region. We conducted extensive measurements in the region during the period 2009-2011. Detailed investigations of the relative relationship between the subsurface chlorophyll maximum (SCM) and the nitracline suggested that there were two patterns relating to the N/P ratio. Comparing the depths of the nitracline and SCM, it was found that the SCM was usually located from 20 to 40 m and just above the nitracline, where the N/P ratio within the nitracline was below 15, whereas it was located from 10 to 30 m and within the nitracline, where the N/P ratio was above 20. The large value of the N/P ratio in the latter case suggests the influence of CDW. Turbulence measurements showed that the vertical flux of nutrients with vertical mixing was large (small) where the N/P ratio was small (large). A comparison with a time series of primary production revealed a consistency with the pattern of snapshot measurements, suggesting that the nutrient supply from the lower layer contributes considerably to the maintenance of SCM.
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either thermal limits of network components or grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar ... transformer or using solar inverters with new grid support features. This study presents a methodology for the estimation of maximum PV hosting capacity, including an IEC 60076-7 based thermal model of the distribution transformer. Part of a real distribution network of the Braedstrup suburban area in Denmark is used in simulation as a case study model. Furthermore, varying solutions (utilizing thermally upgraded insulation paper in transformers, reactive power services from solar inverters, etc.) are implemented on the network under investigation to examine the PV penetration level, and finally key results learnt ...
The Multivariate Watson Distribution: Maximum-Likelihood Estimation and other Aspects
Sra, Suvrit
2011-01-01
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where $\pm x$ are equivalent), using them in high dimensions can be difficult. Why so? Largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived, but these are either grossly inaccurate in high dimensions (Directional Statistics, Mardia & Jupp, 2000) or, when reasonably accurate (J. Machine Learning Research W&CP, v2, Bijral et al., 2007, pp. 35-42), they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture modelling with Watson distributions; here we uncover...
Murakami, Yuri; Ietomi, Kunihiko; Yamaguchi, Masahiro; Ohyama, Nagaaki
2007-10-01
Accurate color image reproduction under arbitrary illumination can be realized if the spectral reflectance functions in a scene are obtained. Although multispectral imaging is one of the promising methods to obtain the reflectance of a scene, it is expected to reduce the number of color channels without significant loss of accuracy. This paper presents what we believe to be a new method for estimating spectral reflectance functions from color image and multipoint spectral measurements based on maximum a posteriori (MAP) estimation. Multipoint spectral measurements are utilized as auxiliary information to improve the accuracy of spectral reflectance estimated from image data. Through simulations, it is confirmed that the proposed method improves the estimation accuracy, particularly when a scene includes subjects that belong to various categories.
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
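The self-consistency algorithm for the NPMLE can be sketched as a fixed-point iteration on the point masses, in the style of Efron and Petrosian; this is a generic illustration of the estimator the abstract starts from, not the paper's covariance estimator:

```python
def npmle_doubly_truncated(x, u, v, n_iter=200):
    """Self-consistency (fixed-point) iteration for the nonparametric MLE
    of P(X = x_j) from doubly truncated data, where each x_i is observed
    only because u_i <= x_i <= v_i.  A minimal sketch of the standard
    algorithm, without convergence checks."""
    n = len(x)
    # incl[i][j] = 1 if support point x_j is observable for sample i
    incl = [[1.0 if u[i] <= x[j] <= v[i] else 0.0 for j in range(n)]
            for i in range(n)]
    f = [1.0 / n] * n                      # start from the uniform masses
    for _ in range(n_iter):
        denom = [sum(incl[i][k] * f[k] for k in range(n)) for i in range(n)]
        g = [1.0 / sum(incl[i][j] / denom[i] for i in range(n))
             for j in range(n)]
        s = sum(g)
        f = [gj / s for gj in g]           # renormalize to a distribution
    return f

# usage: with no effective truncation the NPMLE is the empirical distribution
f = npmle_doubly_truncated([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [10.0, 10.0, 10.0])
# f ~ [1/3, 1/3, 1/3]
```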
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat \psi$ is a function of the MLE $\hat \theta.$ To reduce $\hat \psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\hat \psi_{MU},$ often reduces either totally or partially $\hat \psi$'s bias when estimating a shape parameter $\psi.$ For the Pareto model $\hat...
A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.
2014-09-01
In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.
Maja Olsbjerg
2015-10-01
Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also applied when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Minjeong Jeon
2014-04-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of this study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described, and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty-derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated-rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Seto, Junji; Wada, Takayuki; Iwamoto, Tomotada; Tamaru, Aki; Maeda, Shinji; Yamamoto, Kaori; Hase, Atsushi; Murakami, Koichi; Maeda, Eriko; Oishi, Akira; Migita, Yuji; Yamamoto, Taro; Ahiko, Tadayuki
2015-10-01
Intra-species phylogeny of Mycobacterium tuberculosis has been regarded as a clue to estimate its potential risk to develop drug-resistance and various epidemiological tendencies. Genotypic characterization of variable number of tandem repeats (VNTR), a standard tool to ascertain transmission routes, has been improving as a public health effort, but determining phylogenetic information from those efforts alone is difficult. We present a platform based on maximum a posteriori (MAP) estimation to estimate phylogenetic information for M. tuberculosis clinical isolates from individual profiles of VNTR types. This study used 1245 M. tuberculosis clinical isolates obtained throughout Japan for construction of an MAP estimation formula. Two MAP estimation formulae, classification of Beijing family and other lineages, and classification of five Beijing sublineages (ST11/26, STK, ST3, and ST25/19 belonging to the ancient Beijing subfamily and modern Beijing subfamily), were created based on 24 loci VNTR (24Beijing-VNTR) profiles and phylogenetic information of the isolates. Recursive estimation based on the formulae showed high concordance with their authentic phylogeny by multi-locus sequence typing (MLST) of the isolates. The formulae might further support phylogenetic estimation of the Beijing lineage M. tuberculosis from the VNTR genotype with various geographic backgrounds. These results suggest that MAP estimation can function as a reliable probabilistic process to append phylogenetic information to VNTR genotypes of M. tuberculosis independently, which might improve the usage of genotyping data for control, understanding, prevention, and treatment of TB.
On the rate of convergence of the maximum likelihood estimator of a k-monotone density
Gao, FuChang; Wellner, Jon A.
2009-01-01
Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.
YIN; Changming; ZHAO; Lincheng; WEI; Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of Σ_{i=1}^n Z_i Z_i', a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) when the specification of the covariance matrix is correct.
Maximum entropy estimation of glutamate and glutamine in MR spectroscopic imaging.
Rathi, Yogesh; Ning, Lipeng; Michailovich, Oleg; Liao, HuiJun; Gagoski, Borjan; Grant, P Ellen; Shenton, Martha E; Stern, Robert; Westin, Carl-Fredrik; Lin, Alexander
2014-01-01
Magnetic resonance spectroscopic imaging (MRSI) is often used to estimate the concentration of several brain metabolites. Abnormalities in these concentrations can indicate specific pathology, which can be quite useful in understanding the disease mechanism underlying those changes. Due to higher concentration, metabolites such as N-acetylaspartate (NAA), Creatine (Cr) and Choline (Cho) can be readily estimated using standard Fourier transform techniques. However, metabolites such as Glutamate (Glu) and Glutamine (Gln) occur in significantly lower concentrations and their resonance peaks are very close to each other making it difficult to accurately estimate their concentrations (separately). In this work, we propose to use the theory of 'Spectral Zooming' or high-resolution spectral analysis to separate the Glutamate and Glutamine peaks and accurately estimate their concentrations. The method works by estimating a unique power spectral density, which corresponds to the maximum entropy solution of a zero-mean stationary Gaussian process. We demonstrate our estimation technique on several physical phantom data sets as well as on in-vivo brain spectroscopic imaging data. The proposed technique is quite general and can be used to estimate the concentration of any other metabolite of interest.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
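To illustrate the Rao-Blackwell mechanism the abstract discusses, here is a hedged simulation sketch using the classic U(0, θ) model with its complete sufficient statistic max(X); this is a simpler, well-behaved case, not the paper's non-complete counterexample:

```python
import numpy as np

# Classic illustration (NOT the paper's counterexample): X_1..X_n ~ U(0, theta).
# The crude unbiased estimator is 2*mean(X); Rao-Blackwellizing it on the
# complete sufficient statistic max(X) yields the UMVUE (n+1)/n * max(X).
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 20000

samples = rng.uniform(0.0, theta, size=(reps, n))
crude = 2.0 * samples.mean(axis=1)            # unbiased but high-variance
umvue = (n + 1) / n * samples.max(axis=1)     # conditioned on max(X): unbiased, much less variance
```

Both estimators are unbiased, but conditioning on the complete sufficient statistic reduces the variance by roughly a factor of four for n = 10 (θ²/(3n) versus θ²/(n(n+2))).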
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
Anonymous
2007-01-01
This paper addresses the problem of parameter estimation for multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means for computing the maximum-likelihood (ML) parameter estimate of the system. A closed form of the expectation for the studied system subject to Gaussian noise is derived, and the parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust algorithm implementation based on QR factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary point, the convergence rate, and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.
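The non-decreasing-likelihood property of EM noted above is easy to demonstrate in a much simpler setting than the multivariable stochastic systems studied here. A sketch for a two-component 1-D Gaussian mixture, with illustrative initialisation choices:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; returns the final
    parameters and the per-iteration log-likelihoods (which never decrease)."""
    x = np.asarray(x)
    # crude, illustrative initialisation from the data spread
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    ll_hist = []
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        total = dens.sum(axis=1)
        ll_hist.append(np.log(total).sum())
        r = dens / total[:, None]
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return (pi, mu, var), np.array(ll_hist)
```

The recorded log-likelihood sequence is monotone non-decreasing, which mirrors the property the paper proves for its system-identification setting.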
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results give estimates of the probability of obtaining correct estimates of b, to a given desired precision, from samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
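The Aki-Utsu estimator itself is a one-line formula; a minimal sketch (the function name and simulation settings are illustrative), including Utsu's half-bin correction for rounded magnitudes:

```python
import numpy as np

def b_value_aki_utsu(mags, m_c, delta_m=0.0):
    """Aki-Utsu maximum likelihood estimate of the Gutenberg-Richter b-value.

    mags    : magnitudes at or above the completeness magnitude m_c
    delta_m : magnitude rounding interval (0 for continuous magnitudes);
              Utsu's correction shifts m_c down by half a bin.
    """
    mags = np.asarray(mags, dtype=float)
    return np.log10(np.e) / (mags.mean() - (m_c - delta_m / 2.0))
```

The abstract's central caution applies directly here: because the estimate depends on a sample mean, small catalogs make it wildly unstable, and representative samples typically need hundreds of events.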
Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion
Huajun Liu
2016-01-01
This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching in outdoor scenes; at the same time, the multi-scale strategy overcomes the problems of road surface self-similarity and local occlusions. Secondly, a support probability of each flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed based not only on image motion residuals but also on their distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of the inlier parts of the optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera
Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.
1987-01-01
A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.
Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation
Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin
2016-12-01
If risk-based criteria are used in the design of high hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km² and 15,280 km²) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10⁻⁷ and 10⁻⁶, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogenous region to investigate the nature of the relationship between the AEP of PMP and catchment area.
Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation
Zhicheng Zhang; Jun Lin; Yaowu Shi
2013-01-01
The Maximum Likelihood (ML) method has an excellent performance for Direction-Of-Arrival (DOA) estimation, but a multidimensional nonlinear solution search is required, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment. It offers an excellent alternative to conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular meta-heuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, Signal-to-Noise Ratio (SNR), and number of iterations. The computational loads of ABC-based ML and conventional ML methods for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Iammarino, Marco; Di Taranto, Aurelia; Muscarella, Marilena
2012-02-01
Sulphiting agents are commonly used food additives. They are not allowed in fresh meat preparations. In this work, 2250 fresh meat samples were analysed to establish the maximum concentration of sulphites that can be considered as "natural" and therefore be admitted in fresh meat preparations. The analyses were carried out by an optimised Monier-Williams Method and the positive samples confirmed by ion chromatography. Sulphite concentrations higher than the screening method LOQ (10.0 mg · kg(-1)) were found in 100 samples. Concentrations higher than 76.6 mg · kg(-1), attributable to sulphiting agent addition, were registered in 40 samples. Concentrations lower than 41.3 mg · kg(-1) were registered in 60 samples. Taking into account the distribution of sulphite concentrations obtained, it is plausible to estimate a maximum allowable limit of 40.0 mg · kg(-1) (expressed as SO(2)). Below this value the samples can be considered as "compliant".
Noise removal in multichannel image data by a parametric maximum noise fraction estimator
Conradsen, Knut; Ersbøll, Bjarne Kjær; Nielsen, Allan Aasbjerg
1991-01-01
Some approaches to noise removal in multispectral imagery are presented. The primary contribution of the present work is the establishment of several ways of estimating the noise covariance matrix from image data and a comparison of the noise separation performances. A case study with Landsat MSS data demonstrates that the principal components are not sorted correctly in terms of visual image quality, whereas the minimum/maximum autocorrelation factors (MAFs) and the maximum noise fractions (MNFs) are. A case study with Landsat TM data shows an ordering which is consistent with the spatial wavelength in the components. The case studies indicate that a better noise separation is attained when using more complex noise models than the simple model implied by MAF analysis.
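A minimal numpy sketch of the MAF/MNF construction: estimate the noise covariance from neighbour differences (one simple choice among the several noise models the paper compares), then solve the generalized eigenproblem to order components by noise fraction. The synthetic data and function name are illustrative, not the authors' implementation:

```python
import numpy as np

def mnf(img):
    """Maximum noise fraction transform of a (rows, cols, bands) image.

    The noise covariance is estimated from horizontal neighbour differences
    (a simple stand-in for a full noise model); components are returned
    ordered from least to most noisy.
    """
    rows, cols, bands = img.shape
    X = img.reshape(-1, bands) - img.reshape(-1, bands).mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    # noise estimate: difference of horizontally adjacent pixels
    d = (img[:, 1:, :] - img[:, :-1, :]).reshape(-1, bands)
    sigma_n = 0.5 * np.cov(d, rowvar=False)
    # generalized eigenproblem sigma_n w = lambda sigma w, via whitening
    L = np.linalg.cholesky(sigma)
    Linv = np.linalg.inv(L)
    evals, evecs = np.linalg.eigh(Linv @ sigma_n @ Linv.T)
    W = Linv.T @ evecs                    # columns = MNF directions
    order = np.argsort(evals)             # ascending noise fraction
    return evals[order], W[:, order], X @ W[:, order]
```

On data with one smooth band and one pure-noise band, the first component captures the smooth structure while the last is dominated by noise, which is the ordering the abstract contrasts with plain principal components.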
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimate of the parameters of a complex 2-D sinusoid, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
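A hedged 1-D analogue of the scheme: coarse DFT peak, then local refinement of the periodogram cost. The paper's actual refinement uses a barycentric interpolation formula with Newton's algorithm on the 2-D cost; the simple coarse-to-fine bracket search below stands in for that step:

```python
import numpy as np

def ml_freq_estimate(x, zoom_iters=30):
    """Estimate the frequency (cycles/sample) of a single complex sinusoid
    in noise: coarse FFT peak, then coarse-to-fine refinement of the ML
    periodogram cost |sum_n x_n e^{-j 2 pi f n}|^2.
    """
    n = len(x)
    t = np.arange(n)
    cost = lambda f: np.abs(np.exp(-2j * np.pi * f * t) @ x) ** 2
    # coarse stage: DFT bin with the largest magnitude
    k = np.argmax(np.abs(np.fft.fft(x)))
    f, step = k / n, 1.0 / n
    # fine stage: repeatedly shrink a three-point bracket around the peak
    for _ in range(zoom_iters):
        candidates = np.array([f - step, f, f + step])
        f = candidates[np.argmax([cost(c) for c in candidates])]
        step /= 2.0
    return f % 1.0
```

The coarse bin lands within half a DFT bin of the true peak, after which the refinement error halves per iteration; Newton's method (as in the paper) would converge far faster per evaluation.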
Rizzo, R. E.; Healy, D.; De Siena, L.
2017-02-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimating bulk permeability and therefore fluid flow, especially for rocks with low primary porosity, where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. By applying Maximum Likelihood Estimators to outcrop fracture data, we generate fracture network models with the same statistical attributes as the ones observed in outcrop, from which we can achieve more robust predictions of bulk permeability.
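For the log-normal case found to fit best here, maximum likelihood estimation is especially simple: the ML parameters are the mean and the (biased, ML) standard deviation of the log-transformed attribute. A minimal sketch, not the authors' code:

```python
import numpy as np

def lognormal_mle(x):
    """Maximum likelihood estimates (mu, sigma) for a log-normal sample:
    the sample mean and ML standard deviation of log(x)."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx.mean(), logx.std(ddof=0)
```

Fitted (mu, sigma) pairs like these can then parameterise synthetic fracture length and aperture populations for the network models the abstract describes.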
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry.
Souza e Silva, Christina G de; Araújo, Claudio Gil S
2015-10-01
Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-specific equations to minimize that error. This study assessed 1715 adults (18 to 91 years, 68% men) undertaking maximum CPX in a lower limbs cycle ergometer (LLCE) with ramp protocol. The percentage error (E%) between measured VO2max and that estimated from the modified ACSM equation (Lang et al. MSSE, 1992) was calculated. Then, estimation equations were developed: 1) for all the population tested (C-GENERAL); and 2) separately by sex (C-MEN and C-WOMEN). Measured VO2max was higher in men than in women: 29.4 ± 10.5 and 24.2 ± 9.2 mL.(kg.min)-1. The equations for estimating VO2max [in mL.(kg.min)-1] were: C-GENERAL = [final workload (W)/body weight (kg)] x 10.483 + 7; C-MEN = [final workload (W)/body weight (kg)] x 10.791 + 7; and C-WOMEN = [final workload (W)/body weight (kg)] x 9.820 + 7. The E% for men was: -3.4 ± 13.4% (modified ACSM); 1.2 ± 13.2% (C-GENERAL); and -0.9 ± 13.4% (C-MEN). The error of estimate of VO2max by use of sex-specific equations was reduced, but not eliminated, in exercise tests on LLCE.
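The three equations quoted above can be wrapped directly; the function name and argument conventions are illustrative:

```python
def vo2max_cycle(final_watts, weight_kg, sex=None):
    """Estimated VO2max in mL/(kg.min) from the abstract's cycle-ergometry
    equations: (final workload / body weight) * coefficient + 7, with the
    coefficient chosen by sex ('M', 'F') or the pooled value if sex is None."""
    coef = {"M": 10.791, "F": 9.820, None: 10.483}[sex]
    return final_watts / weight_kg * coef + 7.0
```

For example, a 70 kg subject reaching a 150 W final workload gets a pooled estimate of about 29.5 mL/(kg.min), with the male-specific equation slightly higher and the female-specific one slightly lower.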
Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M
2014-12-01
We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.
Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG
2008-01-01
In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function Σ_{i=1}^n X_i(y_i - μ(X_i'β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of Σ_{i=1}^n X_i X_i', X_i are bounded p × q regressors, and y_i are q × 1 responses.
Terror birds on the run: a mechanical model to estimate its maximum running speed
Blanco, R. Ernesto; Jones, Washington W
2005-01-01
‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is validated on three living large terrestrial bird species. On the basis of the tibiotarsal strength we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087
Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment
Sesay Abu B
2004-01-01
This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. An analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro
2010-03-01
We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
Adaptive speckle reduction of ultrasound images based on maximum likelihood estimation
Xu Liu(刘旭); Yongfeng Huang(黄永锋); Wende Shou(寿文德); Tao Ying(应涛)
2004-01-01
A method has been developed in this paper to achieve effective speckle reduction in medical ultrasound images. To exploit full knowledge of the speckle distribution, maximum likelihood estimation is used to obtain the speckle parameters corresponding to its statistical mode. The results are then incorporated into nonlinear anisotropic diffusion to achieve adaptive speckle reduction. Verified with simulated and clinical ultrasound images, the algorithm is shown to enhance features of clinical interest and to reduce speckle noise more efficiently than classical filters. To exclude the contribution of edges, changes in the contrast-to-noise ratio of different regions are also compared to assess the performance of this approach.
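The abstract couples ML-estimated speckle statistics with nonlinear anisotropic diffusion. The diffusion half of that pipeline can be sketched as a classical Perona-Malik iteration; this is a simplified stand-in, with the paper's speckle-adapted conduction coefficient replaced by the standard exponential one and periodic boundary handling for brevity:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.15, gamma=0.2):
    """Perona-Malik diffusion: smooth flat regions strongly, edges weakly.
    Periodic boundaries via np.roll, for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficient g(d) = exp(-(d/kappa)^2): small at edges,
        # so strong gradients diffuse little and are preserved
        u = u + gamma * (np.exp(-(dn / kappa) ** 2) * dn
                         + np.exp(-(ds / kappa) ** 2) * ds
                         + np.exp(-(de / kappa) ** 2) * de
                         + np.exp(-(dw / kappa) ** 2) * dw)
    return u
```

In the adaptive scheme described above, kappa would be driven by the ML-estimated speckle parameters rather than fixed by hand.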
Magnard, C.; Small, D.; Meier, E.
2015-03-01
The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single-pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) an ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.
Suligowski, Roman
2014-05-01
Probable Maximum Precipitation is estimated based upon the physical mechanisms of precipitation formation at the Kielce Upland. The estimation stems from meteorological analysis of extremely high precipitation events which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations derived from surface weather observation stations. The most significant elements of this research include the comparison between distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas for humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard at the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for computation of factors featuring the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed based on precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of front or ascending currents, forced by orography) is computed from the quotient of precipitable water in
Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes
Hong Chang HU; Lei SONG
2014-01-01
The paper studies a generalized linear model (GLM) y_t = h(x_t^T β) + ε_t, t = 1, 2, ..., n, where ε_1 = η_1, ε_t = ρ ε_{t-1} + η_t, t = 2, 3, ..., n, h is a continuously differentiable function, and the η_t are independent and identically distributed random errors with zero mean and finite variance σ². Firstly, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.
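A hedged illustration of the model above: simulate y_t = h(x_t^T β) + ε_t with AR(1) errors and recover β, ρ and σ² by a simple two-step scheme. Here h is taken as the identity, and plain least-squares/moment steps stand in for the paper's QML estimator; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, rho, sigma = 5000, 2.0, 0.6, 1.0

# simulate y_t = h(x_t * beta) + eps_t with h = identity and AR(1) errors
x = rng.standard_normal(n)
eps = np.empty(n)
eps[0] = rng.standard_normal()
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + sigma * rng.standard_normal()
y = x * beta + eps

# step 1: crude beta estimate ignoring the error dependence (consistent)
beta_hat = (x @ y) / (x @ x)

# step 2: estimate rho from the lag-1 autocorrelation of the residuals
r = y - x * beta_hat
rho_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])

# step 3: innovation variance sigma^2 from the whitened residuals
eta = r[1:] - rho_hat * r[:-1]
sigma2_hat = eta @ eta / len(eta)
```

The full QML estimator iterates between these steps and handles a nonlinear h; this sketch only shows why β, ρ and σ² are all identifiable from one realization.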
da Silva, A J; Santos, D O C; Lima, R F
2013-01-01
Recently, we demonstrated the existence of nonextensivity in neuromuscular transmission [Phys. Rev. E 84, 041925 (2011)]. In the present letter, we propose a general criterion based on the q-calculus foundations and nonextensive statistics to estimate the values of both the scale factor and the q-index using the maximum likelihood q-estimation method (MLqE). We next applied our theoretical findings to electrophysiological recordings from the neuromuscular junction (NMJ), where spontaneous miniature end plate potentials (MEPP) were analyzed. These calculations were performed in both normal and high extracellular potassium concentration, [K+]o. This protocol was assumed to test the validity of the q-index in electrophysiological conditions closely resembling physiological stimuli. Surprisingly, the analysis showed a significant difference between the q-index in high and normal [K+]o, where the magnitude of nonextensivity was increased. Our letter provides a general way to obtain the best q-index from the q-Gaussian distrib...
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric field energies represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...
Howell, L W
2002-01-01
The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum-variance, and normally distributed spectral information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope. This ML methodology is then generalized to estimate bro...
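As a toy version of the spectral-parameter estimation described above: for a pure (unbroken) power law with a known lower cutoff, the ML estimate of the index has a closed form. The broken power law and the detector response functions of the paper are omitted; this is only the single-segment special case, with an illustrative index of 2.7:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Closed-form ML estimate of the index alpha for a pure power law
    p(x) ~ x^(-alpha) on [xmin, inf): alpha = 1 + n / sum(log(x/xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# sample from a power law with alpha = 2.7 via inverse transform:
# X = xmin * U^(-1/(alpha - 1)) for U uniform on (0, 1)
rng = np.random.default_rng(2)
alpha_true, xmin = 2.7, 1.0
u = rng.random(200_000)
sample = xmin * u ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_mle(sample, xmin)
```

With a real instrument, the likelihood would be built from the detector response folded with the (broken) spectrum, and the maximization done numerically.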
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
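The iterative procedure analysed above is, in modern terms, the EM algorithm for a normal mixture. A minimal sketch for two univariate components follows; the initialisation and iteration count are arbitrary choices, not the paper's:

```python
import numpy as np

def em_normal_mixture(x, n_iter=200):
    """EM iterations for a two-component univariate normal mixture.
    The 1/sqrt(2*pi) constants are omitted since they cancel in the
    responsibility ratio."""
    x = np.asarray(x, dtype=float)
    # crude initialisation from the data spread
    w, mu1, mu2 = 0.5, x.min(), x.max()
    var1 = var2 = x.var()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * np.exp(-0.5 * (x - mu1) ** 2 / var1) / np.sqrt(var1)
        p2 = (1 - w) * np.exp(-0.5 * (x - mu2) ** 2 / var2) / np.sqrt(var2)
        g = p1 / (p1 + p2)
        # M-step: responsibility-weighted ML updates
        w = g.mean()
        mu1 = (g * x).sum() / g.sum()
        mu2 = ((1 - g) * x).sum() / (1 - g).sum()
        var1 = (g * (x - mu1) ** 2).sum() / g.sum()
        var2 = ((1 - g) * (x - mu2) ** 2).sum() / (1 - g).sum()
    return w, mu1, mu2, var1, var2

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 1, 3000), rng.normal(3, 1, 7000)])
w, mu1, mu2, var1, var2 = em_normal_mixture(data)
```

The local-convergence result summarized above is exactly about iterations of this kind: they converge to the consistent ML estimate when started close enough to it.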
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models.
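The complex-step method (CSM) mentioned above is easy to state concretely: for a real-analytic f, f'(x) ≈ Im f(x + ih)/h. Because no subtraction of nearby values occurs, there is no cancellation error, so h can be taken extremely small:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h.
    Truncation error is O(h^2) and there is no subtractive cancellation,
    unlike forward/central finite differences."""
    return np.imag(f(x + 1j * h)) / h

d = complex_step_derivative(np.sin, 0.7)  # exact answer is cos(0.7)
```

This is why the abstract can claim "zero numerical round-off errors" at the price of complexifying the model code: every real operation along the filter correction directions must accept complex inputs.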
A probabilistic estimate of maximum acceleration in rock in the contiguous United States
Algermissen, Sylvester Theodore; Perkins, David M.
1976-01-01
This paper presents a probabilistic estimate of the maximum ground acceleration to be expected from earthquakes occurring in the contiguous United States. It is based primarily upon the historic seismic record which ranges from very incomplete before 1930 to moderately complete after 1960. Geologic data, primarily distribution of faults, have been employed only to a minor extent, because most such data have not been interpreted yet with earthquake hazard evaluation in mind. The map provides a preliminary estimate of the relative hazard in various parts of the country. The report provides a method for evaluating the relative importance of the many parameters and assumptions in hazard analysis. The map and methods of evaluation described reflect the current state of understanding and are intended to be useful for engineering purposes in reducing the effects of earthquakes on buildings and other structures. Studies are underway on improved methods for evaluating the relative earthquake hazard of different regions. Comments on this paper are invited to help guide future research and revisions of the accompanying map. The earthquake hazard in the United States has been estimated in a variety of ways since the initial effort by Ulrich (see Roberts and Ulrich, 1950). In general, the earlier maps provided an estimate of the severity of ground shaking or damage but the frequency of occurrence of the shaking or damage was not given. Ulrich's map showed the distribution of expected damage in terms of no damage (zone 0), minor damage (zone 1), moderate damage (zone 2), and major damage (zone 3). The zones were not defined further and the frequency of occurrence of damage was not suggested. Richter (1959) and Algermissen (1969) estimated the ground motion in terms of maximum Modified Mercalli intensity. Richter used the terms "occasional" and "frequent" to characterize intensity IX shaking and Algermissen included recurrence curves for various parts of the country in the paper
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry
Christina G. de Souza e Silva
2015-01-01
Background: Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. Objective: To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-specific equations to minimize that error. Methods: This study assessed 1715 adults (18 to 91 years, 68% men) undertaking maximum CPX on a lower limbs cycle ergometer (LLCE) with a ramp protocol. The percentage error (E%) between measured VO2max and that estimated from the modified ACSM equation (Lang et al., MSSE, 1992) was calculated. Then, estimation equations were developed: 1) for the whole population tested (C-GENERAL); and 2) separately by sex (C-MEN and C-WOMEN). Results: Measured VO2max was higher in men than in women: 29.4 ± 10.5 and 24.2 ± 9.2 mL.(kg.min)-1 (p < 0.01). The equations for estimating VO2max [in mL.(kg.min)-1] were: C-GENERAL = [final workload (W)/body weight (kg)] x 10.483 + 7; C-MEN = [final workload (W)/body weight (kg)] x 10.791 + 7; and C-WOMEN = [final workload (W)/body weight (kg)] x 9.820 + 7. The E% for men was: -3.4 ± 13.4% (modified ACSM); 1.2 ± 13.2% (C-GENERAL); and -0.9 ± 13.4% (C-MEN) (p < 0.01). For women: -14.7 ± 17.4% (modified ACSM); -6.3 ± 16.5% (C-GENERAL); and -1.7 ± 16.2% (C-WOMEN) (p < 0.01). Conclusion: The error of estimate of VO2max by use of sex-specific equations was reduced, but not eliminated, in exercise tests on the LLCE.
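The three estimation equations quoted in the abstract transcribe directly into code (units: final workload in W, body weight in kg, VO2max in mL.(kg.min)-1):

```python
def vo2max_general(final_workload_w, body_weight_kg):
    """C-GENERAL equation from the abstract."""
    return final_workload_w / body_weight_kg * 10.483 + 7

def vo2max_men(final_workload_w, body_weight_kg):
    """C-MEN equation from the abstract."""
    return final_workload_w / body_weight_kg * 10.791 + 7

def vo2max_women(final_workload_w, body_weight_kg):
    """C-WOMEN equation from the abstract."""
    return final_workload_w / body_weight_kg * 9.820 + 7
```

For example, a man reaching 200 W at 80 kg body weight gets an estimated VO2max of 2.5 x 10.791 + 7 = 33.98 mL.(kg.min)-1.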
A maximum likelihood approach to estimating articulator positions from speech acoustics
Hogden, J.
1996-09-23
This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Anonymous
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989. [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447. [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502. [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368. [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232. [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45. [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
A TL-moments approach has been used to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods was compared with that of L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. The results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution were the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.
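For orientation, the untrimmed case (t1 = 0) of the TL-moments used above reduces to ordinary sample L-moments, which have simple order-statistic estimators. This is a sketch of the first two only; the trimmed versions add combinatorial weights not shown here:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via the standard b_r estimators:
    b0 = mean, b1 = (1/n) * sum_{i} ((i-1)/(n-1)) * x_(i),
    then l1 = b0 (location) and l2 = 2*b1 - b0 (scale)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    return b0, 2 * b1 - b0

l1, l2 = sample_l_moments([2.0, 4.0, 6.0, 8.0])
```

l2 equals half the Gini mean difference, so for [2, 4, 6, 8] it comes out to 5/3; distribution parameters are then obtained by matching these sample quantities to their theoretical counterparts for LN3 or P3.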
Mohammad H. Radfar
2006-11-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
K. Yao
2007-12-01
We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.
Benefit-cost estimation for alternative drinking water maximum contaminant levels
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
Esra Saatci
2010-01-01
We propose a procedure to estimate the model parameters of the presented nonlinear resistance-capacitance (RC) and the widely used linear resistance-inductance-capacitance (RIC) models of the respiratory system by a maximum likelihood estimator (MLE). The measurement noise is assumed to be generalized Gaussian distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramér-Rao lower bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with chronic obstructive pulmonary disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model than by the nonlinear RC model. On the other hand, the patient-group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and better model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian area.
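The GGD noise model above admits a closed-form ML scale estimate once the shape factor is fixed. This sketches only that single step (not the full RC/RIC parameter estimation): setting the derivative of the log-likelihood in alpha to zero gives alpha_hat = (p * mean|x|^p)^(1/p):

```python
import numpy as np

def ggd_scale_mle(x, shape_p):
    """ML estimate of the generalized Gaussian scale alpha for a known
    shape factor p, where the density is proportional to
    exp(-(|x|/alpha)^p)."""
    x = np.asarray(x, dtype=float)
    return (shape_p * np.mean(np.abs(x) ** shape_p)) ** (1.0 / shape_p)

# sanity check: for p = 2 the GGD is Gaussian with variance alpha^2 / 2,
# so unit-variance normal data should give alpha close to sqrt(2)
rng = np.random.default_rng(4)
alpha_hat = ggd_scale_mle(rng.standard_normal(100_000), 2.0)
```

In the abstract's procedure the shape factor itself is estimated from the kurtosis of the residuals rather than assumed known; values of p between 1 and 2 correspond to the sub-Gaussian-to-Laplacian range reported for the patient group.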
A Maximum a Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis.
Li, Yuelong; Lee, Chul; Monga, Vishal
2017-03-01
High dynamic range (HDR) image synthesis from multiple low dynamic range exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity makes conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic data sets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves better complexity-performance tradeoff than conventional methods.
State estimation of Atlantic Ocean circulation at the Last Glacial Maximum
Dail, Holly; Heimbach, Patrick; Wunsch, Carl
2010-05-01
Preliminary results are presented from application of state estimation techniques to Atlantic Ocean circulation at the Last Glacial Maximum (LGM). An extended North Atlantic (33S to 75N) is modeled using the MIT General Circulation Model. ICE-5G 21K bathymetry and first-guess atmospheric forcing fields from fully coupled CCSM3 LGM simulations are used. The model is least-squares fit to the proxies using an algorithm based on the model adjoint. A one-degree resolution, basin-scale setup was chosen so that the adjoint model remains efficient and processes that influence circulation over decades and longer are accessible. Estimates are sought of the ocean circulation that are dynamically consistent and within error bounds of available LGM proxy records. As compared to modern ocean state estimation, challenges include large and uncertain errors, data sparsity, and poorly known atmospheric circulation. The initial focus is on sea surface temperature and sea ice extent data compiled and unified by the Multiproxy Approach for the Reconstruction of the Glacial Ocean surface (MARGO) project. Estimates are made of the wind field and atmospheric heat flux adjustments required to obtain a consistent ocean circulation solution.
Kirkpatrick Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
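The dimension-reduction step described above (keeping the m leading eigenvalue/eigenvector pairs of a covariance matrix) can be sketched directly. Note this operates on a plain numeric covariance matrix rather than estimating a genetic covariance via REML as in the paper; the example matrix is illustrative:

```python
import numpy as np

def leading_principal_components(cov, m):
    """Return the m largest eigenvalues and matching eigenvectors of a
    symmetric covariance matrix (eigh returns them in ascending order)."""
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:m]   # indices of the m largest
    return vals[order], vecs[:, order]

# a rank-2 summary of a 3x3 covariance keeps most of the variation
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 3.0, 0.2],
                [0.5, 0.2, 1.0]])
vals, vecs = leading_principal_components(cov, 2)
explained = vals.sum() / np.trace(cov)
```

The parameter-count saving quoted above follows from storing only the m retained eigenvalues and eigenvectors instead of all k(k + 1)/2 distinct covariance elements.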
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A
2009-06-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f(0) = exp varphi(0) where varphi(0) is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, infinity) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H(k), the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of varphi(0) = log f(0) at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f(0)) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread chloride contamination of surface water is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian maximum entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps differ significantly from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
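The cross-validation R² comparison reported above can be reproduced generically. A minimal sketch with hypothetical held-out data (not the authors' BME code), scoring two competing spatial estimators:

```python
import numpy as np

def cv_r2(obs, est):
    """Cross-validation R^2: 1 - SSE/SST over held-out predictions."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical held-out chloride observations and predictions from two estimators
obs = np.array([10.0, 20.0, 30.0, 40.0])
pred_river = np.array([11.0, 19.0, 31.0, 39.0])    # river-distance-based estimator
pred_euclid = np.array([14.0, 25.0, 24.0, 45.0])   # Euclidean-distance-based estimator

print(cv_r2(obs, pred_river), cv_r2(obs, pred_euclid))  # river scores higher here
```

The higher the cross-validation R², the better the estimator reproduces observations it was not trained on; the paper's 23.67% improvement is this quantity computed over real station data.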
Conditional maximum likelihood estimation in semiparametric transformation model with LTRC data.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-02-06
Left-truncated data often arise in epidemiology and individual follow-up studies because of a biased sampling plan: subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards and proportional odds models as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances of the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data illustrates the methodology.
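The nonparametric baseline behind such conditional approaches is the conditional product-limit (Lynden-Bell) estimator, in which the risk set at each event time contains only subjects who have already entered the study. A minimal sketch on hypothetical left-truncated, right-censored data:

```python
import numpy as np

def conditional_km(entry, time, event):
    """Conditional product-limit (Lynden-Bell) survival estimator for
    left-truncated, right-censored data. entry: truncation times L_i;
    time: observed times X_i; event: 1 for an event, 0 for censoring.
    Returns the event times and the survival estimate just after each."""
    entry, time, event = (np.asarray(a, float) for a in (entry, time, event))
    ts = np.unique(time[event == 1])
    surv, s = [], 1.0
    for u in ts:
        at_risk = np.sum((entry <= u) & (time >= u))  # risk set honors truncation
        d = np.sum((time == u) & (event == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return ts, np.array(surv)

# Hypothetical LTRC sample: (entry, observed time, event indicator)
ts, S = conditional_km(entry=[0.0, 1.0, 0.5, 2.0],
                       time=[3.0, 4.0, 2.5, 5.0],
                       event=[1, 1, 0, 1])
print(ts, S)  # survival drops at t = 3, 4, 5
```

The key difference from the ordinary Kaplan-Meier estimator is the `entry <= u` condition: a subject contributes to the risk set only over the window in which it was actually observable.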
Guo Jianhua
2008-01-01
Background: The goal of linkage analysis is to determine the chromosomal location of the gene(s) for a trait of interest, such as a common disease. Three-locus linkage analysis is an important case of multi-locus problems. Solutions can be found analytically for the case of triple backcross mating. However, in studies of linkage analysis and gene mapping, some natural inequality restrictions on parameters have not been considered sufficiently when the maximum likelihood estimates (MLEs) of the two-locus recombination fractions are calculated. Results: In this paper, we present a study of estimating the two-locus recombination fractions for the phase-unknown triple backcross with two offspring in each family, in the framework of some natural and necessary parameter restrictions. A restricted expectation-maximization (EM) algorithm, called REM, is developed. We also consider some extensions for which the proposed REM can be taken as a unified method. Conclusion: Our simulation work suggests that REM performs well in the estimation of recombination fractions and outperforms the current method. We apply the proposed method to a published data set of mouse backcross families.
Gabarro, Carolina; Turiel, Antonio; Elosegui, Pedro; Pla-Resina, Joaquim A.; Portabella, Marcos
2017-08-01
Monitoring sea ice concentration is required for operational and climate studies in the Arctic Sea. Technologies used so far for estimating sea ice concentration have some limitations, for instance the impact of the atmosphere, the physical temperature of ice, and the presence of snow and melting. In recent years, L-band radiometry has been successfully used to study some properties of sea ice, notably sea ice thickness. However, the potential of satellite L-band observations for obtaining sea ice concentration had not yet been explored. In this paper, we present preliminary evidence that data from the Soil Moisture Ocean Salinity (SMOS) mission can be used to estimate sea ice concentration. Our method, based on a maximum-likelihood estimator (MLE), exploits the marked difference in the radiative properties of sea ice and seawater. In addition, the brightness temperatures of 100 % sea ice and 100 % seawater, as well as their combined values (polarization and angular difference), have been shown to be very stable during winter and spring, so they are robust to variations in physical temperature and other geophysical parameters. Therefore, we can use just two sets of tie points, one for summer and one for winter, to calculate sea ice concentration, leading to a more robust estimate. After analysing the full year 2014 over the entire Arctic, we find that the sea ice concentration obtained with our method agrees well with the Ocean and Sea Ice Satellite Application Facility (OSI SAF) dataset. However, when thin sea ice is present (ice thickness ≲ 0.6 m), the method underestimates the actual sea ice concentration. Our results open the way for a systematic exploitation of SMOS data for monitoring sea ice concentration, at least for specific seasons. Additionally, SMOS data can be synergistically combined with data from other sensors to monitor pan-Arctic sea ice conditions.
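Under a linear mixing model between ice and water tie points with isotropic Gaussian noise, the MLE of concentration reduces to a clipped least-squares projection onto the mixing line. A sketch with hypothetical tie-point values (the actual SMOS retrieval uses measured tie points and channel statistics):

```python
import numpy as np

# Hypothetical winter tie points (brightness temperatures, K) for two channels
T_WATER = np.array([100.0, 70.0])    # 100 % seawater
T_ICE   = np.array([250.0, 240.0])   # 100 % sea ice

def sic_mle(tb):
    """ML estimate of sea ice concentration C under the linear mixing model
    tb = C*T_ICE + (1 - C)*T_WATER + isotropic Gaussian noise: the MLE is the
    least-squares projection onto the mixing line, clipped to [0, 1]."""
    d = T_ICE - T_WATER
    c = float(np.dot(np.asarray(tb, float) - T_WATER, d) / np.dot(d, d))
    return min(max(c, 0.0), 1.0)

print(sic_mle(T_ICE), sic_mle(T_WATER))      # 1.0 0.0
print(sic_mle(0.5 * T_ICE + 0.5 * T_WATER))  # 0.5
```

Clipping enforces the physical constraint 0 ≤ C ≤ 1; the underestimation over thin ice noted in the abstract arises because thin ice has brightness temperatures between the two tie points, so it projects to an intermediate C.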
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation for time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric; that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency method for time-varying cases, a hybrid basis function combining orthogonal polynomials and z-domain mapping is presented, which has advantageous numerical conditioning and allows convenient calculation of the modal parameters. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates it.
Koofigar, Hamid Reza
2016-01-01
The problem of maximum power point tracking (MPPT) in photovoltaic (PV) systems, despite model uncertainties and variations in environmental conditions, is addressed. Introducing a mathematical description, an adaptive sliding mode control (ASMC) algorithm is first developed. Unlike many previous investigations, the output voltage need not be sensed, and neither the upper bound of the system uncertainties nor the variations of irradiance and temperature need to be known. Estimating the output voltage with an update law, an adaptive H∞ tracking algorithm is then developed for the case where the perturbations are energy-bounded. Stability analysis of the proposed tracking control schemes is presented, based on the Lyapunov stability theorem. For comparison, some numerical and experimental studies are also presented and discussed.
Noise Removal From Microarray Images Using Maximum a Posteriori Based Bivariate Estimator
A.Sharmila Agnal
2013-01-01
A microarray image contains information about thousands of genes in an organism, and these images are affected by several types of noise. Noise affects the circular edges of spots and thus degrades image quality. Hence, noise removal is the first step of cDNA microarray image analysis for obtaining gene expression levels and identifying infected cells. The Dual Tree Complex Wavelet Transform (DT-CWT) is preferred for denoising microarray images due to its properties of improved directional selectivity and near shift-invariance. In this paper, bivariate estimators, namely Linear Minimum Mean Squared Error (LMMSE) and Maximum A Posteriori (MAP), derived by applying the DT-CWT, are used for denoising microarray images. Experimental results show that the MAP-based denoising method outperforms existing denoising techniques for microarray images.
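As an illustration of a bivariate MAP estimator of this kind, the following sketches the well-known bivariate shrinkage rule of Sendur and Selesnick, which estimates a wavelet coefficient jointly with its coarser-scale parent. The specific estimator form is an assumption for illustration, not necessarily the one derived in this paper:

```python
import numpy as np

def bivariate_map_shrink(y1, y2, sigma_n, sigma):
    """Bivariate MAP shrinkage (Sendur-Selesnick form): estimate a noisy
    wavelet coefficient from y1 and its coarser-scale parent y2.
    sigma_n: noise standard deviation; sigma: marginal signal std (assumed known)."""
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / np.maximum(r, 1e-12)
    return gain * y1

# Large coefficients (spot edges) pass almost unchanged;
# small ones (noise) are shrunk exactly to zero.
big = bivariate_map_shrink(np.array([10.0]), np.array([10.0]), 1.0, 5.0)
small = bivariate_map_shrink(np.array([0.1]), np.array([0.1]), 1.0, 5.0)
print(big, small)  # small is [0.]
```

In a full denoiser this rule would be applied to each DT-CWT subband, with the noise and signal standard deviations estimated locally from the coefficients.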
Jafarizadeh, M A; Sabric, H; Malekic, B Rashidian
2011-01-01
In this paper, a systematic study of the quantum phase transition within the U(5) ↔ SO(6) limits is presented in terms of an infinite-dimensional algebraic technique in the IBM framework. Energy level statistics are investigated with the Maximum Likelihood Estimation (MLE) method in order to characterize the transitional region. Eigenvalues of these systems are obtained by solving Bethe-Ansatz equations, with least squares fitting to experimental data to obtain the constants of the Hamiltonian. Our results verify the dependence of the Nearest Neighbor Spacing Distribution (NNSD) parameter on the control parameter (c_s) and also display the chaotic behavior of transitional regions in comparison with both limits. In order to compare our results for the two limits with both the GUE and GOE ensembles, we suggest a new NNSD distribution and obtain better KLD distances for the new distribution compared with others in both limits. Also, in the case N → ∞, the total boson number dependence displays the univ...
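An NNSD analysis starts from the normalized spacings of the sorted spectrum. A minimal sketch on synthetic levels, where a crude unit-mean normalization stands in for proper unfolding with a smooth level density:

```python
import numpy as np

def nn_spacings(levels):
    """Nearest-neighbour spacings of a spectrum, normalized to unit mean
    (a crude stand-in for proper unfolding with a smooth level density)."""
    s = np.diff(np.sort(np.asarray(levels, float)))
    return s / s.mean()

# An uncorrelated (integrable-like) spectrum gives Poisson spacing statistics
rng = np.random.default_rng(0)
s = nn_spacings(np.cumsum(rng.exponential(1.0, 2000)))
print(s.mean())  # unit mean by construction
print(s.var())   # near 1 for Poisson statistics; ~0.27 for GOE (Wigner surmise)
```

Histogramming `s` and comparing against the Poisson form exp(-s) and the Wigner surmise (π/2)s·exp(-πs²/4) is the standard way to place a spectrum between the regular and chaotic limits.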
The early maximum likelihood estimation model of audiovisual integration in speech perception
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but has also been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual... ...-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE, while more conventional error measures...
WANG Yang; ZHAN Yi-chun; YU Shao-hua
2007-01-01
This paper investigates routing among autonomous systems (ASs) with quality of service (QoS) requirements. Because QoS-constrained routing has been proved to be nondeterministic polynomial-time (NP) hard even inside an AS, abstract QoS capability must be advertised among ASs to avoid intractability. This paper employs the modified Dijkstra algorithm to compute the maximum bottleneck bandwidth inside an AS. This approach lays a basis for the AS-level switching capability on which interdomain advertisement can be performed. Furthermore, the paper models the aggregated traffic in the backbone network with fractional Brownian motion (FBM), and by integrating along the time axis over short intervals, a good estimate of the distribution of queue length in the next short interval can be obtained. The proposed advertisement mechanism can be easily implemented with current interdomain routing protocols. A numerical study indicates that the presented scheme is effective and feasible.
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets T_cr comprising m_cr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given for the distribution of cross-expectation estimation errors and for the MinMI estimation bias, its variance and its distribution. A growing T_cr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets T_cr1 ⊂ T_cr2 (with numbers of constraints m_cr1...
Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies
Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)
2014-07-15
Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.
Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F
2016-09-01
Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity, as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ), in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points, after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding by missed visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine, after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors.
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
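The Rand measure underlying this fusion model counts pixel pairs on which two label fields agree. A brute-force sketch for small images (the probabilistic Rand measure averages this quantity over several reference segmentations):

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index between two label fields: the fraction of pixel pairs on
    which the segmentations agree (both same-label or both different-label).
    O(n^2) pairwise version, fine for small images."""
    a, b = np.ravel(a), np.ravel(b)
    agree = total = 0
    for i, j in combinations(range(a.size), 2):
        total += 1
        agree += int((a[i] == a[j]) == (b[i] == b[j]))
    return agree / total

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition, relabelled
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # lower: partitions disagree
```

Because the measure is defined on pairs of labels rather than on the labels themselves, it is invariant to relabelling, which is exactly what makes it suitable for fusing independently produced segmentations.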
Karbauskaitė Rasa
2015-12-01
One of the problems in the analysis of a set of images of a moving object is to evaluate the degree of freedom of motion and the angle of rotation. Here the intrinsic dimensionality of the multidimensional data characterizing the set of images can be used. Usually, an image may be represented by a high-dimensional point whose dimensionality depends on the number of pixels in the image. Knowledge of the intrinsic dimensionality of a data set is very useful information in exploratory data analysis, because it makes it possible to reduce the dimensionality of the data without losing much information. In this paper, the maximum likelihood estimator (MLE) of the intrinsic dimensionality is explored experimentally. In contrast to previous works, the radius of a hypersphere covering the neighbours of the analysed points is fixed, instead of the number of nearest neighbours as in the MLE. A way of choosing the radius in this method is proposed. We explore which metric (Euclidean or geodesic) must be evaluated in the MLE algorithm in order to get the true estimate of the intrinsic dimensionality. The MLE method is examined using a number of artificial and real (image) data sets.
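For reference, the fixed-k variant of the MLE of intrinsic dimensionality (Levina-Bickel) that the paper modifies to a fixed radius can be sketched as follows, on hypothetical data:

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel MLE of intrinsic dimensionality, fixed-k variant.
    X: (n, D) data matrix; k: number of nearest neighbours used."""
    X = np.asarray(X, float)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    D.sort(axis=1)
    T = D[:, 1:k + 1]            # distances to the k nearest neighbours (self excluded)
    local = 1.0 / np.log(T[:, -1:] / T[:, :-1]).mean(axis=1)  # per-point estimates
    return float(local.mean())

# Hypothetical test: a 2-D manifold (a flat square) embedded in 5-D space
rng = np.random.default_rng(1)
Y = np.zeros((500, 5))
Y[:, :2] = rng.uniform(size=(500, 2))
est = mle_intrinsic_dim(Y, k=10)
print(est)  # close to 2 (the MLE has a known small-k bias upward)
```

The fixed-radius variant studied in the paper replaces the k-th neighbour distance T_k with a chosen radius R and lets the neighbour count vary per point.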
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz; Xi, Baike; Feng, Zhe; Dong, Xiquan
2016-08-01
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
MLGA: A SAS Macro to Compute Maximum Likelihood Estimators via Genetic Algorithms
Francisco Juretig
2015-08-01
Nonlinear regression is usually implemented in SAS using either PROC NLIN or PROC NLMIXED. Apart from the model structure, initial values need to be specified for each parameter, and after some convergence criteria are fulfilled, the second-order conditions need to be analyzed. Numerical problems are expected to appear when the likelihood is nearly discontinuous, has plateaus or multiple maxima, or when the initial values are distant from the true parameter estimates. The usual solution consists of using a grid and then choosing the set of parameters reporting the highest log-likelihood. However, if the number of parameters or grid points is large, the computational burden will be excessive. Furthermore, there is no guarantee that, as the number of grid points increases, an equal or better set of points will be found. Genetic algorithms can overcome these problems by replicating how nature optimizes its processes. The MLGA macro is presented; it solves a maximum likelihood estimation problem under normality through PROC GA, and the resulting values can later be used as the starting values in SAS nonlinear procedures. As will be demonstrated, this macro can avoid the usual trial-and-error approach that is needed when convergence problems arise. Finally, it is shown how this macro can deal with complicated restrictions involving multiple parameters.
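The macro's core idea, a genetic algorithm searching for high-likelihood starting values, can be sketched outside SAS. A toy real-coded GA maximizing a normal log-likelihood; all operator choices here (truncation selection, mean crossover, Gaussian mutation) are illustrative, not those of PROC GA:

```python
import numpy as np

def ga_mle(loglik, bounds, pop=60, gens=120, seed=0):
    """Toy real-coded genetic algorithm for ML estimation: truncation
    selection, blend (mean) crossover, Gaussian mutation. Returns the best
    parameter vector found -- a starting value, not a polished MLE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(gens):
        fit = np.array([loglik(p) for p in P])
        elite = P[np.argsort(fit)[-pop // 2:]]           # keep the best half
        pairs = elite[rng.integers(0, elite.shape[0], size=(pop // 2, 2))]
        kids = pairs.mean(axis=1) + rng.normal(0.0, 0.05 * (hi - lo), (pop // 2, lo.size))
        P = np.clip(np.vstack([elite, kids]), lo, hi)    # elitism: best never lost
    fit = np.array([loglik(p) for p in P])
    return P[np.argmax(fit)]

# Normal likelihood with unknown mean and standard deviation
data = np.random.default_rng(2).normal(5.0, 2.0, 400)
loglik = lambda p: -data.size * np.log(p[1]) - np.sum((data - p[0]) ** 2) / (2 * p[1] ** 2)
mu, sigma = ga_mle(loglik, bounds=[(0.0, 10.0), (0.1, 10.0)])
```

As in the macro's intended workflow, the GA output would then seed a derivative-based optimizer (the analogue of PROC NLIN/NLMIXED) for final refinement.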
Maximum likelihood estimation for the double-count method with independent observers
Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.
1996-01-01
Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models was considered, ranging from (1) the simplest, in which the probability of detection is the same for both observers and is not affected by either distance from the flight line or group size, to (2) models in which the probability of detection differs between the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey, and some recommendations are given for the design of a survey over the larger Chukchi Sea between Russia and the United States.
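At the covariate-free end of this model series, with detection probabilities estimable purely from the overlap of the two observers' sightings, the double-count estimator reduces essentially to the Lincoln-Petersen estimator. A sketch with hypothetical counts:

```python
def petersen_double_count(n1, n2, n_both):
    """Two independent observers on the same transect: n1 groups seen by
    observer 1, n2 by observer 2, n_both by both. Under covariate-free
    detection this gives detection probabilities and the Lincoln-Petersen
    estimate of the total number of groups, including those missed by both."""
    p1 = n_both / n2      # P(observer 1 detects | observer 2 detected)
    p2 = n_both / n1      # P(observer 2 detects | observer 1 detected)
    n_hat = n1 * n2 / n_both
    return p1, p2, n_hat

# Hypothetical counts from a survey leg
p1, p2, n_hat = petersen_double_count(n1=40, n2=35, n_both=28)
print(p1, p2, n_hat)  # 0.8 0.7 50.0
```

The models in the paper generalize this by letting p1 and p2 depend on distance and group size through logistic regression, with model choice guided by AIC.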
Norhisam Misron
2016-08-01
A new control estimator that maximizes the generated power using a maximum power point estimator is introduced. The power mapping characteristics of the double-stator generator are modeled as a mathematical equation, which is used to develop the estimator for maximum power tracking. The proposed estimator automatically traces the instantaneous maximum power at various load conditions. To stabilize the output voltage, a boost converter is used on the inverter side. The developed double-stator generator is tested with the new estimator for maximum power generation capability under laboratory conditions. The experimental results confirm that with the new estimator, the average power generation capability is increased by 12% and the peak value by 22%.
Mohamad Ridwan
2014-12-01
Jakarta is located on a thick sedimentary layer that potentially has very high seismic wave amplification. However, the available information concerning the subsurface model and bedrock depth is insufficient for a seismic hazard analysis. In this study, a microtremor array method was applied to estimate the geometry and S-wave velocity of the sedimentary layer. The spatial autocorrelation (SPAC) method was applied to estimate the dispersion curve, while the S-wave velocity was estimated using a genetic algorithm approach. The analysis of the 1D and 2D S-wave velocity profiles shows that, along a north-south line, the sedimentary layer thickens towards the north. This correlates positively with a geological cross section derived from a borehole down to a depth of about 300 m. The SPT data from the BMKG site were used to verify the 1D S-wave velocity profile, and they show good agreement. The microtremor analysis reached the engineering bedrock at depths ranging from 359 to 608 m, as depicted by a cross section in the north-south direction. The site class was also estimated at each site, based on the average S-wave velocity down to a depth of 30 m. The sites UI to ISTN belong to class D (medium soil), while BMKG and ANCL belong to class E (soft soil).
Kaplan, D; Thong Hang, T
2007-01-22
The formula for Savannah River Site (SRS) saltstone includes ~25 wt% slag to create a reducing environment for mitigating the subsurface transport of several radionuclides, including Tc-99. Based on laboratory measurements and two-dimensional reactive transport calculations, it was estimated that the SRS saltstone waste form will maintain a reducing environment, and therefore its ability to sequester Tc-99, for well over 10,000 years. For example, it was calculated that ~16% of the saltstone reduction capacity would be consumed after 213,000 years. For purposes of comparison, a second calculation was presented that was based on entirely different assumptions (direct spectroscopic measurements and diffusion calculations). The results from this latter calculation were nearly identical to those from this study. Reaching similar conclusions by two very different calculations and sets of assumptions lends additional credence to the conclusion that the saltstone will likely maintain a reducing environment in excess of 10,000 years.
Adom Giffin
2014-09-01
In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed using filters such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters, and it has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed-form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want not only to filter the noise from our measurements but also to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that the EKF and exponential smoothing are special cases of MrE, we are not "constrained" by these assumptions. In other words, MrE is completely general and can be used in broader ways.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares. The more appropriate maximum likelihood estimator (MLE) for Poisson-distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson-distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event-counting histograms using the maximum likelihood estimator for Poisson-distributed data rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting with a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event-counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure is based on the maximum likelihood estimator (MLE)...
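One standard way to marry Levenberg-Marquardt with the Poisson MLE is to minimize the Poisson deviance, whose signed square-root terms act as residuals, so the optimum coincides with the MLE. A self-contained sketch (a minimal LM loop with a numerical Jacobian, not the authors' implementation) on a toy exponential-decay histogram:

```python
import numpy as np

def poisson_dev_residuals(model, counts):
    """Signed square roots of the Poisson deviance terms; minimizing their
    sum of squares is equivalent to maximizing the Poisson likelihood."""
    m = np.maximum(model, 1e-12)
    safe = np.where(counts > 0, counts, 1.0)   # avoid log(0); zero counts contribute m
    term = m - counts + counts * np.log(safe / m)
    return np.sign(counts - m) * np.sqrt(2.0 * np.maximum(term, 0.0))

def lm_fit(residual_fn, p0, iters=60):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian."""
    p = np.array(p0, float)
    r, lam = residual_fn(p), 1e-3
    for _ in range(iters):
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (residual_fn(p + dp) - r) / dp[j]
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        r_new = residual_fn(p + step)
        if r_new @ r_new < r @ r:      # accept step, relax damping
            p, r, lam = p + step, r_new, lam / 3.0
        else:                          # reject step, increase damping
            lam *= 10.0
    return p

# Toy fluorescence-decay-style histogram: counts ~ Poisson(a * exp(-t / tau))
t = np.linspace(0.0, 5.0, 50)
counts = np.random.default_rng(3).poisson(100.0 * np.exp(-t / 1.5)).astype(float)
a_hat, tau_hat = lm_fit(lambda p: poisson_dev_residuals(p[0] * np.exp(-t / p[1]), counts),
                        [50.0, 1.0])
```

Swapping `poisson_dev_residuals` for ordinary residuals `(model - counts)` recovers the conventional, biased least squares fit, which makes the comparison in the abstract easy to reproduce.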
Liu, Shaowen; Lei, Xiao; Feng, Changge; Hao, Chunyan
2016-07-01
Subsurface formation temperature in the Tarim Basin, northwest China, is vital for assessment of hydrocarbon generation and preservation, and of geothermal energy potential. However, it has not previously been well understood, due to poor data coverage and a lack of highly accurate temperature data. Here, we combined recently acquired steady-state temperature logging data with drill stem test temperature data and measured rock thermal properties, to investigate the geothermal regime and estimate the subsurface formation temperature at depths in the range of 1000-5000 m, together with temperatures at the lower boundary of each of four major Lower Paleozoic marine source rocks buried in this basin. Results show that heat flow of the Tarim Basin ranges between 26.2 and 66.1 mW/m2, with a mean of 42.5 ± 7.6 mW/m2; the geothermal gradient at a depth of 3000 m varies from 14.9 to 30.2 °C/km, with a mean of 20.7 ± 2.9 °C/km. The formation temperature estimated at a depth of 1000 m is between 29 and 41 °C, with a mean of 35 °C, while the temperature at a depth of 3000 m is between 63 and 100 °C, with a mean of 82 °C. The temperature at 5000 m ranges from 97 to 160 °C, with a mean of 129 °C. In general, the spatial patterns of subsurface formation temperature at different depths are similar, characterized by higher temperatures in the uplift areas and lower temperatures in the sags, which indicates the influence of basement structure and lateral variations in thermal properties on the geotemperature field. Using temperature to identify the oil window in the source rocks, most of the uplifted areas in the basin are under favorable conditions for oil generation and/or preservation, whereas the sags with thick sediments are favorable for gas generation and/or preservation. We conclude that the relatively low present-day geothermal regime and large burial depth of the source rocks in the Tarim Basin are favorable for hydrocarbon generation and preservation. In addition, it is found that the
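As a rough illustration of how a formation temperature follows from a geothermal gradient, the basin-mean values reported above are consistent with a simple linear geotherm. The ~20 °C surface temperature used below is an assumed value for illustration, not a figure from the paper:

```python
def formation_temperature(surface_temp_c, gradient_c_per_km, depth_m):
    """Linear geotherm: T(z) = T0 + G * z, with G in degC/km and z in metres."""
    return surface_temp_c + gradient_c_per_km * depth_m / 1000.0

# Basin-mean gradient of 20.7 degC/km and an assumed ~20 degC surface temperature:
t3000 = formation_temperature(20.0, 20.7, 3000)
```

With these inputs the linear geotherm gives about 82 °C at 3000 m, in line with the reported basin mean.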
Estimation of the Probable Maximum Flood for a Small Lowland River in Poland
Banasik, K.; Hejduk, L.
2009-04-01
The planning, design and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in the study to predict the PMF for a small agricultural lowland river basin of Zagozdzonka (left tributary of Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area - upstream of the gauge of Plachty - of 82 km2, has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences - SGGW since 1962. An over 40-year flow record was used in a previous investigation for predicting the T-year flood discharge (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the results with the 100-year flood. A new depth-duration curve of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). The exponential formula, with an exponent value of 0.47, i.e. close to the exponent in the formula for the world PMP and also in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN method (USDA-SCS, 1986). The CN value, as well as the parameters of the IUH model (Nash, 1957), have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. Variability of the parameter values with the size of the events will be discussed in the paper. The results of the analysis have shown that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct hydrographs caused by rainfall events of critical duration is 4.0. References 1.Banasik K
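The CN method referenced above converts a rainfall depth into effective rainfall (direct runoff) via the standard SCS Curve Number relations. The sketch below is a generic metric-unit implementation with the usual initial abstraction Ia = 0.2S; the CN value of 75 in the example is illustrative, not the basin's calibrated value:

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (USDA-SCS, 1986), metric units.
    S is the potential maximum retention; initial abstraction Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0          # retention (mm) from the curve number
    ia = 0.2 * s
    if p_mm <= ia:                    # all rainfall absorbed before runoff starts
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 75.0)        # runoff depth (mm) for a 100 mm storm
```

For a 100 mm storm and CN = 75 this yields roughly 41 mm of direct runoff.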
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2011-11-01
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods
Saether, Bjoern
1997-12-31
This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included in the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option that can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various types of prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable in seismic data, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
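A minimal sketch of the linear MMSE idea described above, assuming a Gaussian prior on the susceptibilities and a linear forward operator H (the names and values here are hypothetical). Note that the posterior covariance, i.e. the reliability, depends only on the prior and noise covariances, which is exactly why reliabilities can be evaluated before any measurements are made:

```python
import numpy as np

def mmse_estimate(mu_x, cov_x, H, R, y):
    """Linear MMSE update of a Gaussian prior N(mu_x, cov_x), given data
    y = H x + noise with noise covariance R. Returns posterior mean/covariance."""
    S = H @ cov_x @ H.T + R
    K = cov_x @ H.T @ np.linalg.inv(S)       # MMSE gain
    mu_post = mu_x + K @ (y - H @ mu_x)
    cov_post = cov_x - K @ H @ cov_x         # reliability: independent of measured values
    return mu_post, cov_post

# Two susceptibilities, one magnetic measurement sensitive only to the first:
mu_post, cov_post = mmse_estimate(np.zeros(2), np.eye(2),
                                  np.array([[1.0, 0.0]]), np.array([[1.0]]),
                                  np.array([2.0]))
```

The measured component's posterior variance drops from 1.0 to 0.5, while the unobserved component keeps its prior variance.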
Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago
2015-08-01
The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most
Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular
Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.
2015-12-01
The number of forest fires and the accompanying human injuries and physical damages have increased with frequent drought. In this study, forest fire danger zones of Korea are estimated to predict and prepare for future forest fire hazard regions. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of the status, is used to estimate the forest fire hazard regions. The MaxEnt model is primarily for the analysis of species distribution, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS for the past 5 years (2010-2014) are used as occurrence data for the model. Meteorology, topography, and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, precipitation of the dry season, annual effective humidity, effective humidity of the dry season, and aridity index. Consequently, the result was valid based on the AUC (Area Under the Curve) value (0.805), which is used to assess prediction accuracy in the MaxEnt model. Predicted forest fire locations also corresponded well with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topography variables such as TWI (Topographic Wetness Index) and slope also contributed to the forest fire. As a result, the east coast and the southern part of the Korean peninsula were predicted to have high forest fire risk. In contrast, high-altitude mountain areas and the west coast appeared to be safe from forest fire. The result of this study is similar to former studies, which indicates high risks of forest fire in accessible areas and reflects climatic characteristics of the east and south in the dry season. To sum up, we estimated the forest fire hazard zone with existing forest fire locations and environment variables and had
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is often not satisfied in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
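The paper's contribution is a modified Levenberg-Marquardt update; as a hedged sketch of the underlying objective only, the Poisson MLE for a histogram is equivalent to minimizing the Poisson deviance, which even a general-purpose minimizer can handle. The exponential-decay example below is illustrative and is not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def poisson_deviance(params, t, counts, model):
    """Poisson MLE merit function: 2*sum(m - n + n*ln(n/m)).
    Zero-count bins contribute only their model value m."""
    m = model(params, t)
    if np.any(m <= 0):                    # reject invalid parameter regions
        return np.inf
    n = counts
    term = np.where(n > 0, n * np.log(np.maximum(n, 1) / m), 0.0)
    return 2.0 * np.sum(m - n + term)

def decay(params, t):
    a, tau = params
    return a * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
counts = rng.poisson(decay((100.0, 2.0), t))   # simulated lifetime histogram
fit = minimize(poisson_deviance, x0=(80.0, 1.0), args=(t, counts, decay),
               method="Nelder-Mead")
a_hat, tau_hat = fit.x
```

The recovered amplitude and lifetime land close to the simulated values (100, 2.0); the L-M variant in the paper reaches the same minimum more efficiently.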
Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals
Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys
2016-04-01
The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
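Of the interval-construction methods compared above, the percentile bootstrap is the simplest to sketch. The toy annual-maxima values below are invented for illustration, and the statistic is the sample mean rather than a fitted GEV quantile:

```python
import numpy as np

def percentile_bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of iid data:
    resample with replacement, recompute the statistic, take empirical quantiles."""
    rng = np.random.default_rng(seed)
    stats = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical series of annual maximum precipitation (mm):
annual_maxima = np.array([55., 61., 48., 72., 90., 66., 58., 81., 49., 70.,
                          63., 77., 52., 68., 95.])
lo, hi = percentile_bootstrap_ci(annual_maxima, np.mean)
```

For extreme quantiles the same resampling loop applies, with the GEV fit and quantile evaluation replacing `np.mean`; as the abstract notes, those intervals become much wider.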
Sonali Sachin Sankpal
2016-01-01
Scattering and absorption of light are the main reasons for limited visibility in water. The suspended particles and dissolved chemical compounds in water are also responsible for scattering and absorption of light in water. The limited visibility in water results in degradation of underwater images. The visibility can be increased by using an artificial light source in the underwater imaging system. But the artificial light illuminates the scene in a nonuniform fashion. It produces a bright spot at the center with a dark region at the surroundings. In some cases the imaging system itself creates a dark region in the image by producing shadow on the objects. The problem of nonuniform illumination has been neglected in most image enhancement techniques for underwater images, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed. This paper uses maximum likelihood estimation of the scale parameter to map the distribution of the image to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics like average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
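The maximum likelihood estimate of the Rayleigh scale parameter used for this mapping has a simple closed form, sigma_hat = sqrt(sum(x^2) / (2N)). A quick sketch and sanity check on simulated Rayleigh data:

```python
import numpy as np

def rayleigh_scale_mle(x):
    """Closed-form MLE of the Rayleigh scale parameter sigma:
    sigma_hat = sqrt(sum(x_i^2) / (2 * N))."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

rng = np.random.default_rng(1)
samples = rng.rayleigh(scale=2.0, size=10000)
sigma_hat = rayleigh_scale_mle(samples)
```

On 10,000 simulated intensities with true scale 2.0, the estimate recovers the parameter to within about one percent.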
Jayajit Das '
2015-07-01
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
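The m = 2, n = 1 example above can be checked by direct simulation: the sum of two independent U[0, 1] variables has the triangular density f(x) = x on [0, 1] and 2 - x on [1, 2], so P(X ≤ 1) = 0.5 and P(X ≤ 0.5) = 0.5²/2 = 0.125:

```python
import numpy as np

# Monte Carlo check of the well-defined direction (X = Y1 + Y2, triangular density)
rng = np.random.default_rng(2)
x = rng.uniform(size=100_000) + rng.uniform(size=100_000)
p_one = np.mean(x <= 1.0)    # should approach 0.5
p_half = np.mean(x <= 0.5)   # should approach 0.125
```

The ill-posed direction treated in the paper is the reverse: given only the distribution of X, infer a joint distribution for (Y1, Y2), which MaxEnt resolves by choosing the least-committal candidate.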
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {e_i = Y_i − μ(X_i′β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is S_n^{−1} → 0 as the sample size n → ∞.
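The necessary condition S_n^{-1} → 0 says that the smallest eigenvalue λ_n of the design matrix S_n = ∑ X_i X_i′ must diverge as n grows. A toy numerical check of that behavior, under an assumed random Gaussian design:

```python
import numpy as np

rng = np.random.default_rng(3)

def smallest_eigenvalue(n, p=3):
    """lambda_n: smallest eigenvalue of S_n = sum_i X_i X_i' for a random n x p design."""
    X = rng.normal(size=(n, p))
    return np.linalg.eigvalsh(X.T @ X)[0]   # eigvalsh returns ascending eigenvalues

lam_small = smallest_eigenvalue(100)
lam_big = smallest_eigenvalue(10_000)
```

For such designs λ_n grows roughly linearly in n, so S_n^{-1} → 0 holds and the QMLE convergence rate λ_n^{-1/2} shrinks like n^{-1/2}.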
Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.
Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick
2013-01-01
Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contaminations on qualitative or semi-quantitative bases. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a Benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with Benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests MaxEnt is a valuable method to build a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements, in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.
A maximum noise fraction transform with improved noise estimation for hyperspectral images
LIU Xiang; ZHANG Bing; GAO LianRu; CHEN DongMei
2009-01-01
Feature extraction is often performed to reduce the spectral dimension of hyperspectral images before image classification. The maximum noise fraction (MNF) transform is one of the most commonly used spectral feature extraction methods. The spectral features in several bands of hyperspectral images are submerged by the noise. The MNF transform is advantageous over the principal component (PC) transform because it takes the noise information in the spatial domain into consideration. However, the experiments described in this paper demonstrate that classification accuracy is greatly influenced by the MNF transform when the ground objects are mixed together. The underlying mechanism of this is revealed and analyzed by mathematical theory. In order to improve the performance of classification after feature extraction when ground objects are mixed in hyperspectral images, a new MNF transform, with an improved method of estimating the hyperspectral image noise covariance matrix (NCM), is presented. This improved MNF transform is applied to both simulated data and real data. The results show that compared with the classical MNF transform, the new method enhances the ability of feature extraction and increases classification accuracy.
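The MNF rotation can be sketched as a generalized eigenproblem between the noise and data covariance matrices. In practice the noise covariance matrix is estimated from the image itself (e.g. by shift differencing, the step the paper improves); this sketch simply assumes a separate noise sample is available:

```python
import numpy as np
from scipy.linalg import eigh

def mnf_transform(data, noise_sample):
    """Sketch of the MNF rotation: solve Sigma_noise v = lambda Sigma_data v.
    Small eigenvalues correspond to bands with a low noise fraction."""
    cov_data = np.cov(data, rowvar=False)
    cov_noise = np.cov(noise_sample, rowvar=False)
    vals, vecs = eigh(cov_noise, cov_data)   # ascending noise fraction
    return data @ vecs, vals

rng = np.random.default_rng(5)
n = 2000
signal = 5.0 * rng.normal(size=n)
band0 = signal + 0.1 * rng.normal(size=n)    # high-SNR band
band1 = rng.normal(size=n)                   # pure-noise band
data = np.column_stack([band0, band1])
noise_sample = np.column_stack([0.1 * rng.normal(size=n), rng.normal(size=n)])
components, noise_fraction = mnf_transform(data, noise_sample)
```

The first MNF component isolates the high-SNR direction (tiny noise fraction), while the last is dominated by noise; truncating the trailing components performs the dimension reduction.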
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32, and a...
Lussana, C.
2013-04-01
The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been realised using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density, since otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, it is possible to straightforwardly deliver the information to users on a local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
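The abstract does not specify the exact skewed Gaussian-based PDF; one common choice with the stated properties is the two-piece (split) normal, shown here purely as an assumed illustration. Different spreads below and above the mode give the asymmetry needed to distinguish cooling from warming tails:

```python
import numpy as np

def two_piece_normal_pdf(x, mu, sigma_lo, sigma_hi):
    """Two-piece normal: Gaussian-based but skewed, with spread sigma_lo below
    the mode mu and sigma_hi above it; continuous at the mode and normalized."""
    norm = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma_lo + sigma_hi))
    sigma = np.where(x < mu, sigma_lo, sigma_hi)
    return norm * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10.0, 12.0, 20001)
pdf = two_piece_normal_pdf(x, 0.0, 1.0, 2.0)      # heavier warm tail
area = np.sum(pdf) * (x[1] - x[0])                # numerical check: integrates to 1
```

Extreme warm/cold events can then be flagged from the upper and lower tail quantiles of such a fitted density.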
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact
Beyond QALYs: Multi-criteria based estimation of maximum willingness to pay for health technologies.
Nord, Erik
2017-03-03
The QALY is a useful outcome measure in cost-effectiveness analysis. But in determining the overall value of and societal willingness to pay for health technologies, gains in quality of life and length of life are prima facie separate criteria that need not be put together in a single concept. A focus on costs per QALY can also be counterproductive. One reason is that the QALY does not capture well the value of interventions in patients with reduced potentials for health and thus different reference points. Another reason is a need to separate losses of length of life and losses of quality of life when it comes to judging the strength of moral claims on resources in patients of different ages. An alternative to the cost-per-QALY approach is outlined, consisting in the development of two bivariate value tables that may be used in combination to estimate maximum cost acceptance for given units of treatment (for instance a surgical procedure, or 1 year of medication) rather than for 'obtaining one QALY.' The approach is a follow-up of earlier work on 'cost value analysis.' It draws on work in the QALY field insofar as it uses health state values established in that field. But it does not use these values to weight life years and thus avoids devaluing gained life years in people with chronic illness or disability. Real tables of the kind proposed could be developed in deliberative processes among policy makers and serve as guidance for decision makers involved in health technology assessment and appraisal.
Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.
Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène
2016-07-01
Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
2014-12-12
Journal article (dates covered: 01 Oct 2014 – 30 Nov 2014) titled "Estimate of Solar Maximum Using the 1–8 Å...". The article aims to predict the intensity and date of the solar maximum of the current solar cycle; the statement of the solar cycle 24 prediction panel (Biesecker & Prediction Panel 2007) is available at http://www.swpc.noaa.gov/SolarCycle/SC24/.
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or one of its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort, reducing the cost of the KF to almost 3%. © 2012 American Society of Civil Engineers.
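The low-rank idea behind such filters can be sketched in a few lines (an illustration, not the authors' SEKF/SFKF code): the covariance is kept as a factor P = L Lᵀ with r ≪ n columns, the analysis step is carried out entirely through the factor, and the result is checked against the full-matrix Kalman update.

```python
import numpy as np

# Minimal low-rank Kalman analysis step (sketch; gauge positions and noise
# levels below are invented for illustration).
n, r, m = 40, 4, 3
rng = np.random.default_rng(1)

H = np.zeros((m, n)); H[0, 5] = H[1, 20] = H[2, 35] = 1.0  # three gauges
R = 0.01 * np.eye(m)                                       # obs-noise cov
L = 0.5 * rng.standard_normal((n, r))                      # P = L @ L.T
x = np.zeros(n)                                            # forecast mean
y = np.array([0.2, 1.0, 0.1])                              # observations

A = H @ L                                # m x r: all obs work happens here
S = A @ A.T + R                          # innovation covariance
K = L @ A.T @ np.linalg.inv(S)           # gain via the r-dim subspace
xa = x + K @ (y - H @ x)                 # analysis mean
# Re-factor the analysis covariance: P_a = L (I - A^T S^-1 A) L^T
C = np.linalg.cholesky(np.eye(r) - A.T @ np.linalg.inv(S) @ A)
La = L @ C                               # analysis factor: P_a = La @ La.T

# Verify against the O(n^2) full-covariance update.
P = L @ L.T
Kf = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
Pa = (np.eye(n) - Kf @ H) @ P
print(np.allclose(La @ La.T, Pa), np.allclose(xa, x + Kf @ (y - H @ x)))
```

All matrix work involving the observations happens in the r-dimensional subspace, which is where the computational savings over the full KF come from.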
James O Lloyd-Smith
BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
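Maximum-likelihood estimation of the dispersion k can be sketched in a few lines (a hypothetical profile-likelihood implementation, not the study's code), using the fact that the MLE of the negative binomial mean is the sample mean:

```python
import numpy as np
from scipy import optimize, stats

# Simulate a highly overdispersed sample (k < 1) and recover k by ML.
rng = np.random.default_rng(7)
k_true, mu = 0.3, 5.0
data = rng.negative_binomial(k_true, k_true / (k_true + mu), size=2000)

def negloglik(log_k):
    k = np.exp(log_k)                    # optimize on the log scale: k > 0
    p = k / (k + data.mean())            # profile out the mean
    return -stats.nbinom.logpmf(data, k, p).sum()

res = optimize.minimize_scalar(negloglik, bounds=(-5, 5), method="bounded")
k_hat = float(np.exp(res.x))
print(round(k_hat, 3))                   # close to k_true = 0.3
```

With small samples or under-reported zero counts, this estimate drifts upward, which is the bias pattern the abstract describes.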
E. Waarts (Eric); M.A. Carree (Martin); B. Wierenga (Berend)
1991-01-01
The authors build on the idea put forward by Shugan to infer product maps from scanning data. They demonstrate that the actual estimation procedure used by Shugan has several methodological problems and may yield unstable estimates. They propose an alternative estimation procedure, full-...
Sung Woo Park
2015-03-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam, based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
P. Heydari
2016-02-01
Background: The maximum aerobic capacity (VO2max) can be used to evaluate cardio-pulmonary condition and to provide a physiological balance between a person and his job. Objectives: The aim of this study was to estimate the maximum aerobic capacity and its associated factors among students of medical emergencies in Qazvin. Methods: This cross-sectional study was conducted on 36 male students of medical emergencies at Qazvin University of Medical Sciences in 2015. The Physical Activity Readiness Questionnaire (PAR-Q) and a demographic questionnaire were completed by the participants. The participants meeting the inclusion criteria were assessed using the Gerkin treadmill protocol. Data were analyzed using the Mann-Whitney U and Kruskal-Wallis tests. Findings: Mean maximum aerobic capacity was 1.94 ± 0.27 L/min. The maximum aerobic capacity was associated with weight and height groups. There was a significant positive correlation between maximal aerobic capacity and height, weight, and body mass index. Conclusion: The Gerkin treadmill test is useful for estimating the maximum aerobic capacity and the maximum working ability in students of medical emergencies.
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual cou
Speed Estimation in Geared Wind Turbines Using the Maximum Correlation Coefficient
Skrimpas, Georgios Alexandros; Marhadi, Kun S.; Jensen, Bogi Bech
2015-01-01
...to overcome the above-mentioned issues. The high-speed stage shaft angular velocity is calculated based on the maximum correlation coefficient between the 1st gear mesh frequency of the last gearbox stage and a pure sine tone of known frequency and phase. The proposed algorithm utilizes vibration signals...
Maximum-entropy parameter estimation for the k-NN modified value-difference kernel
Hendrickx, I.H.E.; van den Bosch, A.; Verbruggen, R.; Taatgen, N.; Schomaker, L.
2004-01-01
We introduce an extension of the modified value-difference kernel of $k$-nn by replacing the kernel's default class distribution matrix with the matrix produced by the maximum-entropy learning algorithm. This hybrid algorithm is tested on fifteen machine learning benchmark tasks, comparing the hybri
Maximum-Entropy Parameter Estimation for the k-nn Modified Value-Difference Kernel
Hendrickx, Iris; Bosch, Antal van den
2005-01-01
We introduce an extension of the modified value-difference kernel of k-nn by replacing the kernel's default class distribution matrix with the matrix produced by the maximum-entropy learning algorithm. This hybrid algorithm is tested on fifteen machine learning benchmark tasks, comparing the hybrid
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Bakaev, Nikolai Yu.; Crouzeix, Michel; Thomee, Vidar
2006-01-01
In recent years several papers have been devoted to stability and smoothing properties in maximum-norm of finite element discretizations of parabolic problems. Using the theory of analytic semigroups it has been possible to rephrase such properties as bounds for the resolvent of the associated discr
R. van Mastrigt (Ron)
1990-01-01
The contractility of the urinary bladder can be adequately described in terms of the parameters P0 (isometric pressure) and Vmax (maximum contraction velocity). In about 12% of urodynamic evaluations of patients, these clinically relevant parameters can be calculated from pressure and flow...
Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data
Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.
2003-01-01
The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…
Zhu, Ke (doi:10.1214/11-AOS895)
2012-01-01
This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA-GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.
Lee, Wonyul; Liu, Yufeng
2012-10-01
Multivariate regression is a common statistical tool for practical problems. Many multivariate regression techniques are designed for univariate response cases. For problems with multiple response variables available, one common approach is to apply the univariate response regression technique separately to each response variable. Although it is simple and popular, the univariate response approach ignores the joint information among response variables. In this paper, we propose three new methods for utilizing joint information among response variables. All methods are in a penalized likelihood framework with weighted L1 regularization. The proposed methods provide sparse estimators of the conditional inverse covariance matrix of the response vector given explanatory variables, as well as sparse estimators of regression parameters. Our first approach estimates the regression coefficients with plug-in estimated inverse covariance matrices, our second approach estimates the inverse covariance matrix with plug-in estimated regression parameters, and our third approach estimates both simultaneously. Asymptotic properties of these methods are explored. Our numerical examples demonstrate that the proposed methods perform competitively in terms of prediction and variable selection, as well as inverse covariance matrix estimation.
Lijuan Cui
2016-11-01
We monitored the water quality and hydrological conditions of a horizontal subsurface flow constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R² = 0.96, p < 0.01), but the NH4+-N load was not correlated with the removal rate (R² = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m·year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R² = 0.96, p < 0.01). The NH4+-N area rate was correlated with DO (R² = 0.69, p < 0.01), suggesting that the factors that influenced the N removal rate in this wetland met Liebig's law of the minimum.
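The first-order area-based model behind these constants can be illustrated with invented numbers (not the study's raw data): k = q·ln(C_in/C_out) for hydraulic loading rate q, with the temperature correction k_T = k_20·θ^(T − 20):

```python
import math

# Illustrative first-order area-based wetland kinetics (all numbers assumed).
q = 20.0                      # hydraulic loading rate, m/year
c_in, c_out = 8.0, 2.0        # NO3-N in and out, mg/L
k_T = q * math.log(c_in / c_out)   # field-temperature rate constant, m/year

theta, T = 1.004, 12.0        # temperature coefficient from the abstract
k_20 = k_T / theta ** (T - 20)     # back-correct to the 20 degree C value
print(round(k_T, 1), round(k_20, 1))   # 27.7 28.6
```

Note how weakly θ = 1.004 acts: an 8 degree temperature change moves the rate constant by only about 3%, consistent with the abstract's finding that the constants were not correlated with temperature.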
Pilot power optimization for AF relaying using maximum likelihood channel estimation
Wang, Kezhi
2014-09-01
Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.
Giovana Mara Rodrigues Borges
2016-11-01
Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual maximum daily rainfall, as well as models of heavy rain, for the city of Formiga, MG. To do this, annual maximum daily rainfall data were ranked in decreasing order to identify the statistical distribution that best describes the exceedance probabilities. A daily rainfall disaggregation methodology was used for the intense-rain model studies and adjusted with Intensity-Duration-Frequency (IDF) and exponential models. The study found that the Gumbel model adhered best to the data with respect to observed frequency, as indicated by the chi-squared test, and that the exponential model best conforms to the observed data for predicting intense rains.
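Fitting a Gumbel law to an annual-maximum series, as done in this study, can be sketched with scipy (the rainfall values are simulated, not the Formiga series, and the 100-year quantile shown is purely illustrative):

```python
import numpy as np
from scipy import stats

# Simulate 60 years of annual maximum daily rainfall from a known Gumbel law,
# refit it, and read off a design quantile (all parameters assumed).
rng = np.random.default_rng(2)
annual_max = stats.gumbel_r.rvs(loc=50, scale=12, size=60, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)          # ML fit of the two params
p100 = stats.gumbel_r.ppf(1 - 1 / 100, loc, scale)   # 100-year daily rainfall
print(round(loc, 1), round(scale, 1), round(p100, 1))
```

A chi-squared goodness-of-fit test, as used in the paper, would then compare observed and expected class frequencies under the fitted distribution.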
Maximum Entropy Principle Based Estimation of Performance Distribution in Queueing Theory
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In related research on queuing systems, in order to determine the system state, there is a widespread practice to assume that the system is stable and that distributions of the customer arrival ratio and service ratio are known information. In this study, the queuing system is looked at as a black box without any assumptions on the distribution of the arrival and service ratios and only keeping the assumption on the stability of the queuing system. By applying the principle of maximum entropy, the performance distribution of queuing systems is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness of fit test, the accuracy and generality for practical purposes of the principle of maximum entropy approach is demonstrated. PMID:25207992
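The maximum-entropy construction described in this abstract can be sketched for the simplest case, a single constraint on the mean number in system: the entropy-maximizing distribution over n = 0, 1, 2, ... is then p_n ∝ exp(−λn), i.e. geometric, which coincides with the M/M/1 stationary law. A minimal numerical check, assuming a truncated support:

```python
import numpy as np
from scipy.optimize import brentq

# Find the Lagrange multiplier lam so that p_n ~ exp(-lam * n) has the
# target mean number in system (support truncated at N for computation).
Lbar, N = 2.0, 200

def mean_given(lam):
    w = np.exp(-lam * np.arange(N))
    p = w / w.sum()
    return p @ np.arange(N)

lam = brentq(lambda l: mean_given(l) - Lbar, 1e-6, 10.0)
w = np.exp(-lam * np.arange(N)); p = w / w.sum()

# Closed form: geometric with rho = Lbar / (1 + Lbar), p_n = (1-rho) rho^n.
rho = Lbar / (1 + Lbar)
print(np.allclose(p, (1 - rho) * rho ** np.arange(N), atol=1e-8))
```

Adding further constraints (e.g. server utilization) changes the exponent to a weighted sum of the constraint functions, which is how the paper derives richer performance distributions.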
Yang Fengfan
2004-01-01
A new technique for turbo decoders is proposed, using a local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that the iterative decoder with the new technique outperforms the decoder with the traditional Gaussian approach for the extrinsic information under the same conditions.
Maximum a posteriori covariance estimation using a power inverse wishart prior
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximu...... class of prior distributions generalizing the inverse Wishart prior, discuss its properties, and demonstrate the estimator on simulated and real data....
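For the plain inverse Wishart prior that the paper's power inverse Wishart generalizes, the MAP covariance estimate has a closed form. A minimal sketch (prior hyperparameters assumed, mean known to be zero) showing that the MAP estimate stays positive definite even when the sample covariance is singular:

```python
import numpy as np

# MAP covariance under an inverse Wishart prior IW(Psi, nu): the posterior
# mode is (Psi + scatter) / (nu + n + p + 1), shrinking the sample
# covariance toward the prior scale matrix.
rng = np.random.default_rng(3)
p, n = 20, 10                            # more dimensions than samples
X = rng.standard_normal((n, p))          # true covariance = identity

scatter = X.T @ X                        # known-zero-mean scatter matrix
nu, Psi = p + 2, np.eye(p)               # weak prior centered at identity
sigma_mle = scatter / n                  # singular: rank <= n < p
sigma_map = (Psi + scatter) / (nu + n + p + 1)

print(np.linalg.matrix_rank(sigma_mle),           # 10, not invertible
      np.all(np.linalg.eigvalsh(sigma_map) > 0))  # MAP estimate is PD
```

This is exactly the small-sample regime the abstract targets: the usual estimator breaks down when the dimension exceeds the sample count, while the regularized estimator remains usable in downstream methods such as PCA or factor analysis.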
Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.
1981-01-01
A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful for estimating the reliability of predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum attainable r value degrades as data uncertainty increases. The corresponding confidence interval of r is determined by using the Fisher r→Z transform.
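The Fisher r→Z construction mentioned here is standard: z = atanh(r) is approximately normal with standard error 1/√(n − 3), so a confidence interval is built on the z scale and mapped back. A minimal sketch with illustrative r and n (not the article's data):

```python
import math

# 95% confidence interval for a correlation via the Fisher r -> Z transform.
def fisher_ci(r, n, z_crit=1.96):
    z = math.atanh(r)                 # Fisher transform
    se = 1.0 / math.sqrt(n - 3)       # approximate standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.80, 30)
print(round(lo, 3), round(hi, 3))     # 0.618 0.901
```

The asymmetry of the interval around r = 0.80 reflects the bounded, skewed sampling distribution of r that the transform is designed to correct.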
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-02-09
The Nakagami distribution is an approximation useful to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of MLE (MLE1 and MLE2, respectively), and Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
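The moment-based estimator (MBE) compared in this study has a simple closed form, m̂ = E[R²]² / (E[R⁴] − E[R²]²). A sketch on simulated envelopes (this is the textbook MBE on synthetic data, not the authors' code; a Nakagami(m, Ω) envelope is the square root of a Gamma(m, Ω/m) variate):

```python
import numpy as np

# Generate Nakagami-distributed envelope samples and recover the shape
# parameter m with the moment-based estimator.
rng = np.random.default_rng(5)
m_true, omega, n = 0.8, 1.0, 50_000      # m < 1: pre-Rayleigh statistics
r = np.sqrt(rng.gamma(shape=m_true, scale=omega / m_true, size=n))

r2 = r ** 2
m_hat = r2.mean() ** 2 / r2.var()        # MBE: E[R^2]^2 / Var(R^2)
print(float(m_hat))                      # close to m_true = 0.8
```

Because the MBE uses a fourth moment, its variance grows quickly at small sample sizes, which is why the abstract finds the second-order and Greenwood MLE approximations more stable with short data segments.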
MIN Htwe, Y. M.
2016-12-01
Myanmar has suffered many times from earthquake disasters and four times from tsunamis according to historical data. The purpose of this study is to estimate the tsunami arrival time and maximum tsunami wave amplitude for the Rakhine coast of Myanmar using the TUNAMI F1 model. In this study I calculate the tsunami arrival time and maximum tsunami wave amplitude based on a tsunamigenic earthquake source of moment magnitude 8.5 in the Arakan subduction zone off the west-coast of Myanmar, using the TUNAMI F1 model, selecting eight points on Rakhine coast. The model result indicates that the tsunami waves would first hit Kyaukpyu on the Rakhine coast about 0.05 minutes after the onset of a magnitude 8.5 earthquake, and the maximum tsunami wave amplitude would be 2.37 meters.
Tauson, A H; Chwalibog, André; Jakobsen, K
1998-01-01
Protein and energy metabolism in boars of different breeds, 10 each of Hampshire, Duroc and Danish Landrace, was measured in balance and respiration experiments by means of indirect calorimetry in an open-air circulation system. Measurements were performed in four periods (Periods I-IV) covering...... the body weight range from 25 to 100 kg. In order to achieve maximum protein retention (RP), a daily intake of digestible protein > 12 g/kg^0.75 and metabolisable energy > 1100 kJ/kg^0.75 was assumed to be necessary. Protein retention of Danish Landrace boars was inferior to that of Hampshire and Duroc boars...... in Periods III and IV, and therefore, 55 measurements on Hampshire and Duroc boars fulfilling the chosen criteria for digested protein and ME intake were used for calculation of maximum protein retention, giving the following significant quadratic relationship: RP [g/d] = 11.43·W^0.75 − 0.144·W^1.50 (n = 55, RSD...
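Taking the fitted relationship RP = 11.43·W^0.75 − 0.144·W^1.50 at face value, the peak follows by substituting u = W^0.75, which makes RP a quadratic in u (a quick check, not part of the study; note the implied peak weight lies above the measured 25-100 kg range):

```python
# RP = 11.43 u - 0.144 u^2 with u = W^0.75 is maximal at u = 11.43 / (2 * 0.144).
u_max = 11.43 / (2 * 0.144)          # metabolic weight W^0.75 at the peak
w_max = u_max ** (4 / 3)             # corresponding body weight, kg
rp_max = 11.43 * u_max - 0.144 * u_max ** 2
print(round(w_max, 1), round(rp_max, 1))   # 135.4 226.8
```

So within the 25-100 kg experimental range the fitted retention curve is still rising, with a nominal maximum of about 227 g/d near 135 kg if the relationship were extrapolated.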
Probable maximum precipitation 24 hours estimation: A case study of Zanjan province of Iran
Azim Shirdeli
2012-10-01
One of the primary concerns in designing civil structures such as water storage dams and irrigation and drainage networks is to find an economic scale based on the possibility of natural incidents such as floods, earthquakes, etc. Probable maximum precipitation (PMP) is one of the well-known methods that help design a civil structure properly. In this paper, we study the maximum one-day precipitation using 17 to 50 years of records at 13 stations located in the province of Zanjan, Iran. The study uses two Hershfield methods: the first yields values of 18.17 to 18.48, with PMP24 between 170.14 mm and 255.28 mm; the second yields values between 2.29 and 4.95, with PMP24 between 62.33 mm and 92.08 mm. In addition, when out-of-range data were deleted from the second method, values between 2.29 and 4.31 were calculated, with PMP24 between 76.08 mm and 117.28 mm. The preliminary results indicate that the second Hershfield method provides more stable results than the first one.
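The Hershfield estimate referred to above has the form PMP24 = mean + K_m·s over the annual-maximum daily rainfall series, where K_m is a frequency factor. A sketch with invented numbers (the series and K_m below are illustrative, not the Zanjan data):

```python
import statistics

# Hershfield-style PMP estimate on a hypothetical annual-maximum series.
annual_max_mm = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0, 58.0, 49.0]
K_m = 15.0                              # assumed Hershfield frequency factor

mean = statistics.mean(annual_max_mm)   # 51.6 mm
std = statistics.stdev(annual_max_mm)   # sample standard deviation
pmp24 = mean + K_m * std
print(round(pmp24, 1))                  # 196.7 mm
```

The sensitivity of the result to K_m is the essential point of the abstract: the two Hershfield variants differ mainly in how this factor is obtained, which is why their PMP24 ranges differ so strongly.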
Peng, Hongtao; Lei, Tingwu; Jiang, Zhiyun; Horton, Robert
2016-06-01
Mulching of agricultural fields and gardens with pebbles has long been practiced to conserve soil moisture in some semi-arid regions with low precipitation. Rainfall interception by the pebble mulch itself is an important part of the computation of the water balance for the pebble mulched fields and gardens. The mean equivalent diameter (MED) was used to characterize the pebble size. The maximum static rainfall retention in pebble mulch is based on the water penetrating into the pores of pebbles, the water adhering to the outside surfaces of pebbles and the water held between pebbles of the mulch. Equations describing the water penetrating into the pores of pebbles and the water adhering to the outside surface of pebbles are constructed based on the physical properties of water and the pebble characteristics. The model for the water between pebbles of the mulch is based on the basic equation to calculate the water bridge volume and the basic coordination number model. A method to calculate the maximum static rainfall retention in the pebble mulch is presented. Laboratory rain simulation experiments were performed to test the model with measured data. Paired sample t-tests showed no significant differences between the values calculated with the method and the measured data. The model is ready for testing on field mulches.
1990-11-01
findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or... Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate
Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood
Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are
See, M T; Mabry, J W; Bertrand, J K
1993-11-01
Variance components for number of pigs born alive (NBA) were estimated from sow productivity field records collected by purebred breed associations. Data sets analyzed were as follows: Hampshire (n = 13,537), Landrace (n = 10,822), and Spotted (n = 3,949). Variance components for service sire, sire of sow, dam of sow, and residual effects on NBA (adjusted for parity) were estimated. The single-trait model included relationships between service sires, sires of sows, and dams of sows. The model was implemented using an expectation maximization (EM) REML algorithm. A sparse-matrix solver was also used. Heritability estimates for NBA were .13, .13, and .12 for Hampshire, Spotted, and Landrace, respectively. Estimates of maternal genetic (co)variances (m2) expressed as a proportion of the phenotypic variance were .05, .01, and .03 for Hampshire, Spotted, and Landrace, respectively. Results indicated that service sires account for 1 to 2% of the total variation for NBA. Genetic effects influencing NBA seem to be small in these data sets, but selection for increased NBA should be effective.
Cherchi, Elisabetta; Guevara, Cristian
2012-01-01
...In a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale...
Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data
Rietveld, Piet; Rouwendal, Jan; Zwart, Bert
1997-01-01
In this paper, the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic which does not seem to appear in previous studies is the fact that the density function of the loglikelihood can be m...
Fraternali, Fernando; Marcelli, Gianluca
2011-01-01
We present a meshfree method for the curvature estimation of membrane networks based on the Local Maximum Entropy approach recently presented in (Arroyo and Ortiz, 2006). A continuum regularization of the network is carried out by balancing the maximization of the information entropy corresponding to the nodal data, with the minimization of the total width of the shape functions. The accuracy and convergence properties of the given curvature prediction procedure are assessed through numerical applications to benchmark problems, which include coarse grained molecular dynamics simulations of the fluctuations of red blood cell membranes (Marcelli et al., 2005; Hale et al., 2009). We also provide an energetic discrete-to-continuum approach to the prediction of the zero-temperature bending rigidity of membrane networks, which is based on the integration of the local curvature estimates. The Local Maximum Entropy approach is easily applicable to the continuum regularization of fluctuating membranes, and the predict...
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Kotera, Jan; Šroubek, Filip
2015-02-01
Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.
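The MAP formulation described above (a data-fidelity term plus sparse priors) can be sketched in miniature. The 1-D signal, the smoothed sparse-gradient prior, and the plain gradient-descent solver below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def map_deconvolve(y, k, lam=1e-3, lr=0.3, iters=300):
    """Toy 1-D MAP deconvolution sketch: minimize
    ||k * x - y||^2 + lam * sum(sqrt(dx^2 + eps)),
    i.e. a quadratic data term plus a smoothed sparse-gradient prior."""
    eps = 1e-6
    x = y.copy()
    for _ in range(iters):
        r = np.convolve(x, k, mode="same") - y            # residual
        grad_data = np.convolve(r, k[::-1], mode="same")  # adjoint blur (approx. at borders)
        dx = np.diff(x, append=x[-1])
        w = dx / np.sqrt(dx * dx + eps)                   # derivative of smoothed |dx|
        grad_prior = -np.diff(w, prepend=w[0])
        x -= lr * (grad_data + lam * grad_prior)
    return x
```

Starting the descent from the blurred observation and keeping the step size below the Lipschitz bound of the data term keeps the toy iteration stable.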
Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D
2012-12-20
Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
None
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimation (MQLE) is obtained in QLNM. In an important case, this rate is $O(n^{-1/2}(\log\log n)^{1/2})$, which is just the rate of the LIL of partial sums for i.i.d. variables, and thus cannot be improved any further.
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One of the mathematical models of such flows is the modulated MAP flow of events circulating under conditions of unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimation of the dead time period from observations of arrival times of events is solved by the method of maximum likelihood.
Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography
Gürsoy, Doǧa; Bicer, Tekin; Almer, Jonathan D.; Kettimuthu, Rajkumar; Stock, Stuart; De Carlo, Francesco
2015-06-13
A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times, and significantly improve beamtime efficiency.
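The objective described above (a Poisson log-likelihood plus an a priori smoothness term) can be illustrated on a toy 1-D system. The forward matrix, the quadratic prior, and the projected gradient ascent below are simplifying assumptions for illustration, not the reconstruction code used at the beamline:

```python
import numpy as np

def map_poisson(y, A, beta=0.1, iters=2000, lr=0.01):
    """Toy MAP sketch: maximize the Poisson log-likelihood
    sum(y*log(Ax) - Ax) minus a quadratic smoothness penalty
    beta*sum((x[i+1]-x[i])^2), by projected gradient ascent (x kept positive)."""
    n = A.shape[1]
    x = np.ones(n)
    for _ in range(iters):
        ax = A @ x
        grad_ll = A.T @ (y / ax - 1.0)     # gradient of Poisson log-likelihood
        dx = np.diff(x)
        grad_prior = np.zeros(n)
        grad_prior[:-1] += 2.0 * beta * dx  # gradient of -beta*sum(dx^2)
        grad_prior[1:] -= 2.0 * beta * dx
        x = np.maximum(x + lr * (grad_ll + grad_prior), 1e-8)
    return x
```

The positivity clamp plays the role of the physical constraint that phase fractions cannot be negative; the smoothness weight `beta` trades data fit against local continuity.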
Liarte, Danilo B; Transtrum, Mark K; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P
2016-01-01
We review our work on theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces. These limits are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. We summarize our calculations of the so-called superheating field $H_{\mathrm{sh}}$, beyond which flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and disorder. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. T...
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
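The step-size-controlled procedure described above can be sketched for a two-component 1-D normal mixture; taking the step size to be 1 recovers the plain successive-approximations (EM-type) iteration, and the paper's result concerns step sizes between 0 and 2. The initialization and fixed iteration count below are illustrative assumptions:

```python
import numpy as np

def em_step(x, w, mu, sigma):
    """One successive-approximations step for a two-component 1-D
    normal mixture; returns the updated (weight, means, std devs)."""
    def pdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    r1 = w * pdf(x, mu[0], sigma[0])
    r2 = (1 - w) * pdf(x, mu[1], sigma[1])
    g = r1 / (r1 + r2)                         # responsibilities of component 1
    w_new = g.mean()
    mu_new = np.array([np.sum(g * x) / np.sum(g),
                       np.sum((1 - g) * x) / np.sum(1 - g)])
    s_new = np.array([np.sqrt(np.sum(g * (x - mu_new[0]) ** 2) / np.sum(g)),
                      np.sqrt(np.sum((1 - g) * (x - mu_new[1]) ** 2) / np.sum(1 - g))])
    return w_new, mu_new, s_new

def fit(x, steps=200, omega=1.0):
    """Generalized steepest-ascent variant: theta <- theta + omega*(EM(theta) - theta).
    omega = 1 gives the plain procedure; the paper studies 0 < omega < 2."""
    w = 0.5
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(steps):
        w1, mu1, s1 = em_step(x, w, mu, sigma)
        w = w + omega * (w1 - w)
        mu = mu + omega * (mu1 - mu)
        sigma = np.maximum(sigma + omega * (s1 - sigma), 1e-3)
    return w, mu, sigma
```

On well-separated components the iteration converges to the maximum-likelihood parameters from these crude starting values.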
Maximum likelihood estimation in constrained parameter spaces for mixtures of factor analyzers
Greselin, Francesca; Ingrassia, Salvatore
2013-01-01
Mixtures of factor analyzers are becoming more and more popular in the area of model based clustering of high-dimensional data. According to the likelihood approach in data modeling, it is well known that the unconstrained log-likelihood function may present spurious maxima and singularities and this is due to specific patterns of the estimated covariance structure, when their determinant approaches 0. To reduce such drawbacks, in this paper we introduce a procedure for the parameter estimati...
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. Indeed, beyond the so-called superheating field $H_{\mathrm{sh}}$, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for $H_{\mathrm{sh}}$, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation
Schlaikjer, Malene; Jensen, Jørgen Arendt
2001-01-01
The blood flow in the human cardiovascular system obeys the laws of fluid mechanics. Investigation of the flow properties reveals that a correlation exists between the velocity in time and space. The possible changes in velocity are limited, since the blood velocity has a continuous profile in time...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...
Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L
2015-08-18
Radon ((222)Rn) is a naturally occurring chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium ((238)U), which is ubiquitous in rocks and soils worldwide. Exposure to (222)Rn is likely the second leading cause of lung cancer after cigarette smoking via inhalation; however, exposure through untreated groundwater is also a contributing factor to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater (222)Rn with anisotropic geological and (238)U based explanatory variables is developed, which helps elucidate the factors contributing to elevated (222)Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater (222)Rn across North Carolina including prediction uncertainty. The LUR-BME model of groundwater (222)Rn results in a leave-one out cross-validation r(2) of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled results of (222)Rn concentrations show variability among intrusive felsic geological formations likely due to average bedrock (238)U defined on the basis of overlying stream-sediment (238)U concentrations that is a widely distributed consistently analyzed point-source data.
Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
Ali, Md. Ayub; Ohtsuki, Fumio
2000-05-01
An attempt was made to estimate the maximum increment age (MIA) in height and weight of Japanese boys and girls during the birth years 1893-1990 through the published data of the Ministry of Education, Science, Sports and Culture in Japan. In cases where the same maximum annual increment occurred in two or three successive age classes in a birth year cohort, a new formula (see Eq. 2) was developed to estimate the MIA. The existing formula for estimating MIA was modified to remove the mathematical deficiency (Eq. 1). Estimated MIA shows an overall declining trend, except in birth year cohorts 1934-1951. The effect of World War II on MIA was investigated by a dummy variable regression model. On average, during the birth years 1934-1951, MIA in height decelerated by 1.35 years in boys and 0.54 year in girls, while MIA in weight decelerated by 0.95 year in boys and 0.78 year in girls. Am. J. Hum. Biol. 12:363-370, 2000. Copyright 2000 Wiley-Liss, Inc.
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. http://www.atgc-montpellier.fr/ReplacementMatrix/ olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.
SUBSURFACE FACILITY WORKER DOSE ASSESSMENT
V. Arakali; E. Faillace; A. Linden
2004-02-27
The purpose of this design calculation is to estimate radiation doses received by personnel working in the subsurface facility of the repository performing emplacement, maintenance, and retrieval operations under normal conditions. The results of this calculation will be used to support the design of the subsurface facilities and provide occupational dose estimates for the License Application.
Takagi, Hiroshi; Wu, Wenjie
2016-03-01
Even though the maximum wind radius (Rmax) is an important parameter in determining the intensity and size of tropical cyclones, it has been overlooked in previous storm surge studies. This study reviews the existing estimation methods for Rmax based on central pressure or maximum wind speed. These over- or underestimate Rmax because of substantial variations in the data, although an average radius can be estimated with moderate accuracy. As an alternative, we propose an Rmax estimation method based on the radius of the 50 kt wind (R50). Data obtained by a meteorological station network in the Japanese archipelago during the passage of strong typhoons, together with the JMA typhoon best track data for 1990-2013, enabled us to derive the following simple equation, Rmax = 0.23 R50. Application to a recent strong typhoon, the 2015 Typhoon Goni, confirms that the equation provides a good estimation of Rmax, particularly when the central pressure became considerably low. Although this new method substantially improves the estimation of Rmax compared to the existing models, estimation errors are unavoidable because of fundamental uncertainties regarding the typhoon's structure or insufficient number of available typhoon data. In fact, a numerical simulation for the 2013 Typhoon Haiyan as well as 2015 Typhoon Goni demonstrates a substantial difference in the storm surge height for different Rmax. Therefore, the variability of Rmax should be taken into account in storm surge simulations (e.g., Rmax = 0.15 R50-0.35 R50), independently of the model used, to minimize the risk of over- or underestimating storm surges. The proposed method is expected to increase the predictability of major storm surges and to contribute to disaster risk management, particularly in the western North Pacific, including countries such as Japan, China, Taiwan, the Philippines, and Vietnam.
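The proposed relation Rmax = 0.23 R50, together with the suggested variability band of 0.15 R50 to 0.35 R50 for storm surge simulations, is trivially coded; the function name below is ours:

```python
def estimate_rmax(r50_km, coef=0.23):
    """Radius of maximum wind from the 50-kt wind radius, Rmax = 0.23 * R50.
    The coefficient can be varied (e.g. 0.15-0.35, per the abstract) to
    bracket the uncertainty in storm-surge runs."""
    return coef * r50_km

# Bracket the uncertainty for a typhoon with R50 = 200 km.
low, mid, high = (estimate_rmax(200.0, c) for c in (0.15, 0.23, 0.35))
```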
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive in closed form the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
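For the DA case, a minimal sketch is possible if one simplifies the paper's polynomial-in-time channel model to a channel that is constant over the observation window (an assumption made here purely for brevity). The ML channel and noise-variance estimates then take the familiar closed forms:

```python
import numpy as np

def da_snr_estimate(x, y):
    """Data-aided ML SNR sketch for y = h*x + noise with known pilot
    symbols x, assuming (unlike the paper's polynomial-in-time model)
    a channel h that is constant over the observation window."""
    h_hat = np.vdot(x, y) / np.vdot(x, x)        # ML channel estimate
    noise = y - h_hat * x
    sigma2 = np.mean(np.abs(noise) ** 2)         # ML noise-variance estimate
    return (np.abs(h_hat) ** 2 * np.mean(np.abs(x) ** 2)) / sigma2
```

With many pilots the estimate concentrates tightly around the true per-antenna SNR; the paper's contribution is precisely to remove the constant-channel assumption made here.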
A Regression Equation for the Estimation of Maximum Oxygen Uptake in Nepalese Adult Females
Chatterjee, Pinaki; Banerjee, Alok K; Das, Paulomi; Debnath, Parimal
2010-01-01
Purpose Validity of the 20-meter multi-stage shuttle run test (20-m MST) has not been studied in the Nepalese population. The purpose of this study was to validate the applicability of the 20-m MST in Nepalese adult females. Methods Forty female college students (age range, 20.42-24.75 years) from different colleges of Nepal were recruited for the study. Direct estimation of VO2 max comprised treadmill exercise followed by expired gas analysis with a Scholander micro-gas analyzer, whereas VO2 max was indirectly predicted by the 20-m MST. Results The difference between the mean (±SD) VO2 max values of direct measurement (VO2 max = 32.78 ± 2.88 ml/kg/min) and the 20-m MST (SPVO2 max = 32.53 ± 3.36 ml/kg/min) was statistically insignificant (P>0.1). Highly significant correlation (r=0.94, PVO2 max. Limits of agreement analysis also suggest that the 20-m MST can be applied to the studied population. Conclusion The results of the limits of agreement analysis suggest that the application of the present form of the 20-m MST may be justified in the studied population. However, for better prediction of VO2 max, a new equation has been computed based on the present data to be used for female college students of Nepal. PMID:22375191
Validity of heart rate-based nomograms for estimation of maximum oxygen uptake in the Indian population.
Kumar, S Krishna; Khare, P; Jaryal, A K; Talwar, A
2012-01-01
Maximal oxygen uptake (VO2max) during a graded maximal exercise test is the objective method to assess cardiorespiratory fitness. Maximal oxygen uptake testing is limited to only a few laboratories, as it requires trained personnel and strenuous effort by the subject. At the population level, submaximal tests have been developed to derive VO2max indirectly based on heart rate nomograms, or it can be calculated using anthropometric measures. These heart rate-based prediction standards have been developed for Western populations and are used routinely to predict VO2max in the Indian population. In the present study VO2max was directly measured by a maximal exercise test using a bicycle ergometer and was compared with VO2max derived from recovery heart rate in the Queen's College step test (QCST) (PVO2max I) and with VO2max derived from the Wasserman equation based on anthropometric parameters and age (PVO2max II) in a well defined age group of healthy male adults from New Delhi. The values of directly measured VO2max showed no significant correlation either with the VO2max estimated by QCST or with the VO2max predicted by the Wasserman equation. The Bland-Altman limits of agreement approach revealed that the limits of agreement between directly measured VO2max and PVO2max I or PVO2max II were large, indicating inapplicability of prediction equations developed for Western populations to the population under study. Thus it is evident that there is an urgent need to develop a nomogram for the Indian population, perhaps even for different ethnic sub-populations in the country.
LI Hongmei; SHI Xiaoyong; WANG Hao; HAN Xiurong
2014-01-01
According to historical mean ocean current data from field observations of the Taiwan Ocean Research Institute during 1991-2005 and survey data of nutrients on the continental shelf of the East China Sea (ECS) in the summer of 2006, nutrient fluxes from the Taiwan Strait and Kuroshio subsurface waters, both of which are sources of the Taiwan Warm Current, are estimated using a grid interpolation method. The nutrient fluxes of the two water masses are also compared. The results show that phosphate (PO4-P), silicate (SiO3-Si) and nitrate (NO3-N) fluxes to the ECS continental shelf from the Kuroshio upwelling water are slightly higher than those from the Taiwan Strait water in the summer of 2006. In contrast, owing to its lower velocity, the nutrient flux density (i.e., nutrient fluxes divided by the area of the specific section) of the Kuroshio subsurface water is lower than that of the Taiwan Strait water. In addition, the Taiwan Warm Current deep water, which is mainly constituted by the Kuroshio subsurface water, might directly reach the areas of high-frequency harmful algal blooms in the ECS.
Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise
Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.
2015-11-01
We consider the problem of estimating the direction of arrival of a signal embedded in $K$-distributed noise, when secondary data which contains noise only are assumed to be available. Based upon a recent formula of the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression of the FIM with the two data sets framework. In the specific case of $K$-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for $K$-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming known covariance matrix: the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual $T^{-1}$. Simulations illustrate the different behaviors of the estimators, depending on the FIM being bounded or not.
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
Qing-ping Deng; Xue-jun Xu; Shu-min Shen
2000-01-01
This paper deals with the Crouzeix-Raviart nonconforming finite element approximation of the Navier-Stokes equations in a plane bounded domain, using the so-called velocity-pressure mixed formulation. Quasi-optimal maximum-norm error estimates of the velocity and its first derivatives and of the pressure are derived for the nonconforming C-R scheme of the stationary Navier-Stokes problem. The analysis is based on the weighted inf-sup condition and the technique of weighted Sobolev norms. In addition, the optimal L2-error estimate for the nonconforming finite element approximation is obtained.
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to this fast decay, only a small set of the multi-level covariance matrix coefficients is computed, with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
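The idea of filtering out the deterministic component via contrasts can be sketched with a QR factorization: any basis of the orthogonal complement of the design's column space yields contrasts whose distribution is free of the regression coefficients. This is the generic restricted-likelihood construction, not the paper's multi-level scheme:

```python
import numpy as np

def reml_contrasts(X, y):
    """Build contrasts that filter out the deterministic (fixed-effect)
    component: the columns of Q2 span the orthogonal complement of col(X),
    so z = Q2^T y has zero mean regardless of the regression coefficients."""
    n, p = X.shape
    Q, _ = np.linalg.qr(X, mode="complete")  # full n x n orthogonal basis
    Q2 = Q[:, p:]                            # last n-p columns are orthogonal to col(X)
    return Q2.T @ y, Q2
```

Covariance parameters can then be estimated by maximizing the likelihood of `z` alone, decoupled from the deterministic trend.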
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.
Callahan, R. P.; Taylor, N. J.; Pasquet, S.; Dueker, K. G.; Riebe, C. S.; Holbrook, W. S.
2016-12-01
Geophysical imaging is rapidly becoming popular for quantifying subsurface critical zone (CZ) architecture. However, a diverse array of measurements and measurement techniques are available, raising the question of which are appropriate for specific study goals. Here we compare two techniques for measuring S-wave velocities (Vs) in the near surface. The first approach quantifies Vs in three dimensions using a passive source and an iterative residual least-squares tomographic inversion. The second approach uses a more traditional active-source seismic survey to quantify Vs in two dimensions via a Monte Carlo surface-wave dispersion inversion. Our analysis focuses on three 0.01 km2 study plots on weathered granitic bedrock in the Southern Sierra Critical Zone Observatory. Preliminary results indicate that depth-averaged velocities from the two methods agree over the scales of resolution of the techniques. While the passive- and active-source techniques both quantify Vs, each method has distinct advantages and disadvantages during data acquisition and analysis. The passive-source method has the advantage of generating a three dimensional distribution of subsurface Vs structure across a broad area. Because this method relies on the ambient seismic field as a source, which varies unpredictably across space and time, data quality and depth of investigation are outside the control of the user. Meanwhile, traditional active-source surveys can be designed around a desired depth of investigation. However, they only generate a two dimensional image of Vs structure. Whereas traditional active-source surveys can be inverted quickly on a personal computer in the field, passive source surveys require significantly more computations, and are best conducted in a high-performance computing environment. We use data from our study sites to compare these methods across different scales and to explore how these methods can be used to better understand subsurface CZ architecture.
Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo
2016-04-01
We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
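The load-velocity approach described above can be sketched as a simple linear regression of relative load on peak angular velocity, inverted to estimate 1RM. All data points, the absolute load, and the measured PAV below are hypothetical illustrations, not values from the study:

```python
import numpy as np

# Hypothetical force-velocity data: relative load (% 1RM) vs peak angular
# velocity (PAV, rad/s). The abstract reports a strong linear relationship;
# these points merely mimic that shape.
pct_1rm = np.array([40.0, 60.0, 80.0, 100.0, 120.0])   # % 1RM load
pav     = np.array([6.1, 4.8, 3.4, 2.0, 0.9])          # peak angular velocity

# Simple linear regression of %1RM on PAV (np.polyfit returns [slope, intercept])
slope, intercept = np.polyfit(pav, pct_1rm, 1)

# Predict the relative load corresponding to a measured PAV of a submaximal
# lift, then back-calculate 1RM from the absolute load used (hypothetical).
load_kg = 30.0
measured_pav = 4.8
pct = slope * measured_pav + intercept   # estimated % 1RM for this lift
est_1rm = load_kg / (pct / 100.0)        # estimated 1RM in kg
```

Heavier loads produce lower peak velocities, so the fitted slope is negative; 1RM is the load at which the regression line reaches 100% relative intensity.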
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Kaiyu Wang
2014-01-01
This paper presents an efficient all-digital carrier recovery loop (ADCRL) for quadrature phase shift keying (QPSK). The ADCRL combines a classic closed-loop carrier recovery circuit, the all-digital Costas loop (ADCOL), with a frequency feedforward loop, the maximum likelihood frequency estimator (MLFE), to exploit the advantages of both types of carrier recovery loop and obtain more robust carrier recovery. Because accurate estimation of the frequency offset by the MLFE depends on the linearity of its frequency discriminator (FD), the Coordinate Rotation Digital Computer (CORDIC) algorithm is introduced into the MLFE-based FD to unwrap the phase difference linearly. The frequency offset contained in the unwrapped phase difference is estimated by the MLFE, implemented using only shift and multiply-accumulate units, to help the ADCOL lock quickly and precisely. Joint ModelSim and MATLAB simulations show that the proposed ADCRL outperforms the ADCOL in lock-in time and range. A systematic FPGA-based design procedure for the proposed ADCRL is also presented.
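The CORDIC algorithm mentioned above computes a phase angle using only shifts, adds, and a small table of arctangent constants. A minimal vectoring-mode sketch in floating point (for illustration only; the paper's FPGA implementation would use fixed-point arithmetic):

```python
import math

# Vectoring-mode CORDIC: computes atan(y/x) for x > 0 by rotating the vector
# (x, y) until y is driven to zero, accumulating the applied rotation angle.
# Each iteration uses only a shift (multiplication by 2**-i) and adds.
def cordic_atan2(y, x, iters=16):
    angle = 0.0
    for i in range(iters):
        d = -1.0 if y > 0 else 1.0            # rotation direction drives y -> 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)     # accumulate the applied rotation
    return angle
```

After 16 iterations the angular resolution is about atan(2⁻¹⁵), i.e. a few times 10⁻⁵ rad, which is why the method serves well as a linear phase discriminator.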
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
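The equivalence the abstract states, that least-squares fitting weighted by the inverse noise variance is the MLE for the slope method, can be sketched as follows. The extinction coefficient, intercept, range gates, and noise-variance model below are all assumed for illustration:

```python
import numpy as np

# Lidar slope method: for a homogeneous atmosphere the log range-corrected
# signal is linear in range r:  ln(S(r) * r^2) = ln(A) - 2*sigma*r,
# where sigma is the extinction coefficient and A the zero-range intercept.
sigma_true, lnA_true = 0.05, 10.0            # assumed true values (1/m, log units)
r = np.linspace(100.0, 1000.0, 10)           # range gates (m), assumed
y = lnA_true - 2.0 * sigma_true * r          # noiseless log signal for the sketch
var = 0.01 + 0.0001 * r                      # assumed noise variance, growing with range

# Inverse-variance-weighted least squares via the weighted normal equations
w = 1.0 / var
X = np.vstack([np.ones_like(r), r]).T        # columns: intercept, range
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
lnA_hat = beta[0]
sigma_hat = -beta[1] / 2.0                   # slope is -2*sigma
```

With noiseless synthetic data the fit recovers the assumed parameters exactly; with noisy data the inverse-variance weights down-weight the noisier far-range gates, which is the source of the accuracy advantage the abstract describes.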
Wang, Kezhi
2014-10-01
Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB\\'s in effective signal-to-noise ratio.
Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.
2017-09-01
Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and benefit of joint inversion on the
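The adaptive MCMC step in the inversion scheme above can be illustrated with a minimal random-walk Metropolis sampler. The actual scheme couples Community Land Model simulations, petrophysical transforms, and adaptive proposals; here the "posterior" is a toy one-parameter Gaussian, purely for illustration:

```python
import math
import random

# Minimal Metropolis sampler: propose a random-walk step, accept with
# probability min(1, posterior ratio), and collect samples of the parameter.
random.seed(1)

def log_post(theta):
    # Toy log-posterior ~ N(2.0, 0.5^2); stands in for the misfit between
    # simulated and observed temperature/moisture/resistivity data.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.3)        # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept
    samples.append(theta)

# Discard burn-in, then summarize the a posteriori distribution
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

The posterior mean recovered from the chain approximates the toy posterior's center, which is the sense in which MCMC "estimates a posteriori distributions of desired model parameters."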
Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer
2006-01-01
During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resulting ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for predicting maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The strain potential index, based primarily on correlation of cyclic simple shear laboratory test results with in-situ Standard Penetration Test (SPT) results, is used to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the available data and to deal with the uncertainties intrinsic to back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resulting semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.
Espindola, J.
2010-12-01
The method of Carey and Sparks (1986) has been widely applied to estimate the height of eruptive columns from the dispersal of the maximum clast size. These authors presented curves of maximum downwind range versus crosswind range for different clast diameters and wind speeds, obtained from the numerical solution of a column model developed by Sparks (1986). An improved eruptive column model was later developed by Woods (1988). In this work we present the results of simulating clast dispersal following the procedure of Carey and Sparks (1986) with the eruption column of Woods (1988). The numerical calculations were carried out with a code that computes the height of the column and the vertical velocity, density, and radius along the column. The code then determines the support envelopes for a given clast size, and the fall of the clasts after leaving the column is computed from the equations of motion with viscous friction. For the same downwind and crosswind ranges, this method yields column heights about 10% smaller than the method of Carey and Sparks and wind velocities about 20% higher. The height of the crater above sea level also plays a small role in the results. We present comparisons for the 1982 eruption columns of El Chichon volcano. References: Carey S and RSJ Sparks (1986) Bull. Volcanol. 48: 109-125; Sparks RSJ (1986) Bull. Volcanol. 48: 3-15; Woods AW (1988) Bull. Volcanol. 50: 169-193
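The fall of a clast after it leaves the column, under the equations of motion with viscous friction, can be sketched with a simple Euler integration. Linear drag and every parameter value below (release height, drift velocity, drag coefficient) are illustrative simplifications, not the code described in the abstract:

```python
# Ballistic fall of a clast released from the column's support envelope,
# under gravity and linear viscous drag. Euler time-stepping for clarity.
def clast_fall(h0, vx0, k=0.5, g=9.81, dt=0.01):
    """Return the downwind distance (m) travelled when the clast lands.

    h0: release height (m); vx0: initial horizontal (wind-drift) speed (m/s);
    k: linear drag coefficient (1/s, hypothetical); dt: time step (s).
    """
    x, z, vx, vz = 0.0, h0, vx0, 0.0
    while z > 0.0:
        ax = -k * vx          # horizontal drag decelerates the clast
        az = -g - k * vz      # gravity plus vertical drag
        vx += ax * dt
        vz += az * dt
        x += vx * dt
        z += vz * dt
    return x

downwind = clast_fall(h0=10000.0, vx0=30.0)   # release at 10 km, 30 m/s drift
```

With linear drag the horizontal travel is bounded by vx0/k (here 60 m), while the vertical motion approaches the terminal velocity g/k; real clast ballistics use quadratic drag and altitude-dependent air density, so this is only a structural sketch.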
Campbell, Bruce A.; Watters, Thomas R.
2016-02-01
Subsurface radar sounding observations by the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) and Shallow Radar (SHARAD) instruments are affected by ionospheric phase distortions that lead to image blurring and delay offsets. Based on experience with SHARAD image correction, we propose that ionospheric blurring in MARSIS radargrams may be compensated with a model of smoothly varying quadratic phase errors along the track. This method yields well-focused radargrams for geologic interpretation and allows analysis of the validity range for models used to derive total electron content (TEC) from phase distortion terms in previous MARSIS studies. The quadratic term appears to be a good proxy for TEC at solar zenith angles >65° for MARSIS Band 4 (5 MHz) and >75° for Band 3 (4 MHz). Comparison of MARSIS- and SHARAD-derived TEC values from 2007 to 2014 reveals correlations in seasonal behavior and in the characterization of ionospheric activity due to coronal mass ejections. We also present SHARAD and MARSIS evidence for a persistent region of anomalous radar scattering south of Arsia Mons. These echoes have been previously suggested to arise from refraction of the radar signal by electron density variations. There are no strong signatures, however, in the quadratic image compensation term correlated with the anomalous scattering, suggesting either that electron density variations responsible for refracted signal paths occur primarily in regions offset from the spacecraft track or that these density changes have a minimal impact on the integrated phase distortion of the subspacecraft footprint. We suggest observations and analyses to better constrain the mechanism and timing of such echoes.
Lu Lin
2009-10-01
Estimation of distribution algorithms (EDA) are a new kind of evolutionary algorithm: an EDA builds a probability distribution model from the statistics of the best individuals in the current population, then samples that model to produce the next generation. To solve the NP-hard problem of searching for an optimal network structure with an EDA, a new Maximum Entropy Distribution Algorithm (MEEDA) is provided. The algorithm takes Jaynes' principle as its basis, using the maximum entropy of random variables to estimate their minimum-bias probability distribution, which then serves as the evolution model of the algorithm and produces optimal or near-optimal solutions. This paper then presents a rough programming model for job shop scheduling under uncertain information. The method overcomes the defects of traditional methods, which require pre-set authorized characteristics or quantitatively described attributes, designs a multi-objective optimization mechanism, and expands the application space of rough sets to job shop scheduling under uncertain information. Due to the complexity of the proposed model, traditional algorithms have low capability in producing a feasible solution, so MEEDA is used to enable a solution within a reasonable amount of time. Machine flexibility in processing operations is assumed in order to decrease the complexity of the proposed model. Muth and Thompson's benchmark problems are used to verify and validate the proposed rough programming model and its algorithm. The computational results obtained by MEEDA are compared with a GA, and the comparison demonstrates the effectiveness of MEEDA for the job shop scheduling problem under uncertain information.
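The sample-select-re-estimate loop that defines an EDA can be sketched with a minimal univariate model on a toy OneMax problem. This illustrates only the generic EDA loop, not MEEDA's maximum-entropy model estimation or the rough programming scheduler:

```python
import random

# Minimal univariate EDA (UMDA-style) on OneMax: fitness = number of 1-bits.
# Model: one independent Bernoulli probability per bit, re-estimated each
# generation from the best individuals, then sampled to produce offspring.
random.seed(0)
n, pop, elite = 20, 60, 20
p = [0.5] * n                                    # initial Bernoulli model
for _ in range(40):
    popn = [[1 if random.random() < pi else 0 for pi in p] for _ in range(pop)]
    popn.sort(key=sum, reverse=True)             # rank by fitness
    best = popn[:elite]                          # select the elite set
    # Re-estimate the model from elite marginals, clamped to avoid fixation
    p = [min(0.95, max(0.05, sum(ind[i] for ind in best) / elite))
         for i in range(n)]
best_fitness = max(sum(ind) for ind in popn)
```

Over successive generations the per-bit probabilities drift toward 1, so the sampled population converges on high-fitness strings; MEEDA replaces this frequency-counting model update with a maximum-entropy distribution estimate.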
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
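The Newton-type update at the heart of these algorithms, parameter minus gradient over Hessian of the (restricted) log-likelihood, can be illustrated on the simplest possible variance estimation problem. This is a one-parameter ordinary-likelihood toy, not REML and not the MC-sampled information matrices of the abstract:

```python
# Newton-Raphson MLE of the variance v of N(0, v) data, where the closed-form
# answer is mean(x^2). log L(v) = -n/2 * log(2*pi*v) - sum(x^2)/(2v).
def newton_mle_variance(xs, v0=1.0, iters=50):
    n = len(xs)
    s2 = sum(x * x for x in xs)
    v = v0
    for _ in range(iters):
        grad = -n / (2.0 * v) + s2 / (2.0 * v * v)   # d logL / dv
        hess = n / (2.0 * v * v) - s2 / v ** 3       # d^2 logL / dv^2
        v -= grad / hess                             # Newton step
    return v

xs = [1.0, -2.0, 0.5, 1.5, -1.0]   # illustrative data; mean of squares = 1.7
v_hat = newton_mle_variance(xs)
```

In MC NR REML the analytic Hessian above is replaced by an information matrix generated via sampling, and in MC AI REML by the average of observed and expected information; the update structure is unchanged.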
Subsurface Contamination Control
Y. Yuan
2001-11-16
subsurface repository; (2) Provides a table of derived LRCL for nuclides of radiological importance; (3) Provides an as low as is reasonably achievable (ALARA) evaluation of the derived LRCL by comparing potential onsite and offsite doses to documented ALARA requirements; (4) Provides a method for estimating potential releases from a defective WP; (5) Provides an evaluation of potential radioactive releases from a defective WP that may become airborne and result in contamination of the subsurface facility; and (6) Provides a preliminary analysis of the detectability of a potential WP leak to support the design of an airborne release monitoring system.
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J
2016-03-01
Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M
2013-08-01
In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (e.g., the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum-effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of the hip extensor muscles is estimated.
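The work calculation at issue, integrating the same net torque against two different angle definitions, can be sketched numerically. The torque curve and the fixed 1.4x angular excursion ratio below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

# Joint work = integral of net joint torque with respect to joint angle.
# If theta_UL-HAT sweeps a larger excursion than theta_UL-pelvis for the same
# torque history, the work computed against it is systematically larger.
theta_pelvis = np.linspace(0.0, 1.0, 50)        # hip angle vs pelvis (rad), assumed
theta_hat = 1.4 * theta_pelvis                  # larger excursion vs HAT line, assumed
torque = 200.0 * np.sin(np.pi * theta_pelvis)   # hypothetical net hip torque (N*m)

def work(tau, theta):
    # Trapezoidal integration of torque over angle
    return float(np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(theta)))

W_pelvis = work(torque, theta_pelvis)
W_hat = work(torque, theta_hat)                 # overestimates W_pelvis
overestimate_pct = 100.0 * (W_hat - W_pelvis) / W_pelvis
```

With a uniformly 1.4x larger excursion the overestimate is exactly 40%; in the study the excursion ratio (and hence the 33-49% overestimate) varied with initial pelvic tilt.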
Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R
2012-11-01
The 180 ship recycling yards located on Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest cluster engaged in ship dismantling. About 350 ships are dismantled yearly (avg. 10,000 tons of steel per ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates and scraping of painted metal surfaces are the most commonly performed operations during ship breaking. The pollutants released from a typical plate-cutting operation can either affect workers directly by contaminating the breathing zone (air pollution) or, when emitted in the secondary working zone and subjected to tidal forces, add to the pollution load in the intertidal zone and contaminate sediments. The mathematical modeling exercise performed in this study had a two-pronged purpose: first, to estimate the zone of influence over which the effect of the plume would extend; second, to estimate the cumulative maximum concentration of heavy metals that can occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration predicted by the model was between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could lie between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) limit for Pb (0.5 μg/Nm³). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships, of ideal procedures, and of the corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing exposure to workers and minimizing the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India.
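The inverse dependence of predicted concentration on wind speed (428 μg/Nm³ at 1 m/s versus 113 μg/Nm³ at 4 m/s) is characteristic of Gaussian plume dispersion. A standard ground-level centerline formula is sketched below; the emission rate, dispersion coefficients, and source height are hypothetical, and the abstract does not specify which dispersion model was used:

```python
import math

# Ground-level centerline concentration from a standard Gaussian plume with
# ground reflection: C(x,0,0) = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2)).
def centerline_conc(Q, u, sigma_y, sigma_z, H):
    """Concentration (g/m^3) for source strength Q (g/s), wind speed u (m/s),
    dispersion coefficients sigma_y, sigma_z (m), effective source height H (m)."""
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H ** 2 / (2.0 * sigma_z ** 2))

# Concentration scales as 1/u: the same source at 1 m/s vs 4 m/s wind
# (all other parameters hypothetical and held fixed).
c_1ms = centerline_conc(Q=0.01, u=1.0, sigma_y=20.0, sigma_z=10.0, H=2.0)
c_4ms = centerline_conc(Q=0.01, u=4.0, sigma_y=20.0, sigma_z=10.0, H=2.0)
```

Holding the dispersion coefficients fixed, quadrupling the wind speed cuts the centerline concentration by exactly a factor of four, the same direction of effect as the abstract's two wind-speed cases (the reported ratio differs because sigma_y and sigma_z also vary with meteorology).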
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
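The maximum-likelihood principle underlying PhyML can be illustrated with the simplest case in the same model family: the ML distance between two aligned sequences under the Jukes-Cantor substitution model. This is a textbook formula for illustration, not PhyML's own code:

```python
import math

# Jukes-Cantor maximum-likelihood distance between two aligned sequences:
# d = -(3/4) * ln(1 - 4p/3), where p is the observed fraction of differing
# sites. This is the ML branch-length estimate for a two-taxon tree under JC69.
def jc_distance(seq1, seq2):
    assert len(seq1) == len(seq2), "sequences must be aligned"
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

d = jc_distance("ACGTACGTAC", "ACGTACGAAC")   # 1 differing site in 10
```

The correction pushes d above the raw proportion p (here d is about 0.107 versus p = 0.1) to account for unobserved multiple substitutions; PhyML optimizes the analogous likelihood jointly over tree topology, branch lengths, and richer substitution models.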
Habermehl M. A.
2012-04-01
Groundwater contains dissolved He, and its concentration increases with the residence time of the groundwater. Thus, if the ⁴He accumulation rate is constant, the dissolved ⁴He concentration in groundwater is a measure of the residence time. Since accumulation mechanisms are not easily separated in the field, we estimate the total He accumulation rate during the half-life of ³⁶Cl (3.01 × 10⁵ years). We estimated the ⁴He accumulation rate, calibrated using both cosmogenic and subsurface-produced ³⁶Cl, in the Great Artesian Basin (GAB), Australia, and from the subsurface-produced ³⁶Cl increase at the Äspö Hard Rock Laboratory, Sweden. ⁴He accumulation rates range from (1.9 ± 0.3) × 10⁻¹¹ to (15 ± 6) × 10⁻¹¹ ccSTP·cm⁻³·y⁻¹ in the GAB and (1.8 ± 0.7) × 10⁻⁸ ccSTP·cm⁻³·y⁻¹ at Äspö. We confirmed groundwater flow with a residence time of 0.7-1.06 Ma in the GAB and stagnant groundwater with a long residence time of 4.5 Ma at Äspö. Therefore, the groundwater residence time can be deduced from the dissolved ⁴He concentration and the ⁴He accumulation rate calibrated by ³⁶Cl, provided that ⁴He accumulation, groundwater flow, and other geo-environmental conditions have remained unchanged for the required amount of geological time.
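The residence-time deduction above is a single division: dissolved ⁴He concentration over the calibrated accumulation rate. The sample concentration below is hypothetical; the rate is the lower bound of the GAB range reported in the abstract:

```python
# Groundwater residence time from dissolved 4He, assuming a constant,
# 36Cl-calibrated accumulation rate:  t = C_He / R_He.
c_he = 1.5e-5     # dissolved 4He, ccSTP/cm^3 (hypothetical sample)
rate = 1.9e-11    # accumulation rate, ccSTP/cm^3/yr (GAB lower bound, from abstract)

residence_time_yr = c_he / rate   # on the order of 0.8 Myr for these numbers
```

A sample with this concentration would fall inside the 0.7-1.06 Ma residence-time window the abstract reports for GAB flow, which is the consistency check the method relies on.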
C. Lanni
2012-11-01
Topographic index-based hydrological models have gained wide use for describing the hydrological control on the triggering of rainfall-induced shallow landslides at the catchment scale. A common assumption in these models is that a spatially continuous water table occurs simultaneously across the catchment. However, during a rainfall event isolated patches of subsurface saturation form above an impeding layer, and their hydrological connectivity is a necessary condition for lateral flow initiation at a point on the hillslope.
Here, a new hydrological model is presented which allows us to account for the concept of hydrological connectivity while keeping the simplicity of the topographic index approach. A dynamic topographic index is used to describe the transient lateral flow that is established at a hillslope element when the rainfall amount exceeds a threshold value allowing for (a) development of a perched water table above an impeding layer, and (b) hydrological connectivity between the hillslope element and its own upslope contributing area. A spatially variable soil depth is the main control of hydrological connectivity in the model. The hydrological model is coupled with the infinite slope stability model and with a scaling model for the rainfall frequency-duration relationship to determine the return period of the critical rainfall needed to cause instability in three catchments located in the Italian Alps, where a survey of the spatial distribution of soil depth is available. The model is compared with a quasi-dynamic model in which the dynamic nature of hydrological connectivity is neglected. The results show a better performance of the new model in predicting observed shallow landslides, implying that soil depth spatial variability and connectivity exert a significant control on shallow landsliding.
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
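For readers unfamiliar with the baseline methods compared here, a minimal ordinary-kriging predictor can be sketched as below (an assumed exponential covariance model and toy data; the BME method itself, which additionally fuses soft soil-temperature data, is not reproduced):

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng=50.0):
    """Ordinary kriging with an assumed exponential covariance
    C(h) = sill * exp(-h / rng). The weights solve the kriging system with a
    Lagrange multiplier enforcing that they sum to one (unbiasedness)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-d / rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = sill * np.exp(-np.linalg.norm(xy - x0, axis=1) / rng)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)

# Four toy monitoring points (coordinates in m) with soil-respiration values.
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(round(ordinary_kriging(pts, vals, np.array([5.0, 5.0])), 6))  # -> 2.5 (by symmetry)
```

Kriging with a valid covariance and no nugget is an exact interpolator: predicting at a data point returns the observed value.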
DeVries, Zachary C; Kells, Stephen A; Appel, Arthur G
2016-07-01
Evaluating the critical thermal maximum (CTmax) in insects has provided a number of challenges. Visual observations of endpoints (onset of spasms, loss of righting response, etc.) can be difficult to measure consistently, especially with smaller insects. To resolve this problem, Lighton and Turner (2004) developed a new technique: thermolimit respirometry (TLR). TLR combines real-time measurements of both metabolism (V̇CO₂) and activity to provide two independent, objective measures of CTmax. However, several questions still remain regarding the precision of TLR and how accurate it is in relation to traditional methods. Therefore, we evaluated the CTmax of bed bugs using both traditional (visual) methods and TLR at three important metabolic periods following feeding (1 d, 9 d, and 21 d). Both methods provided similar estimates of CTmax, although traditional methods produced consistently lower values (0.7–1 °C lower than TLR). Despite similar levels of precision, TLR provided a more complete profile of thermal tolerance, describing changes in metabolism and activity leading up to the CTmax that are not available through traditional methods. In addition, feeding status had a significant effect on bed bug CTmax, with bed bugs starved 9 d (45.19 [±0.20] °C) having the greatest thermal tolerance, followed by bed bugs starved 1 d (44.64 [±0.28] °C), and finally bed bugs starved 21 d (44.12 [±0.28] °C). The accuracy of traditional visual methods in relation to TLR is highly dependent on the selected endpoint; however, when performed correctly, both methods provide precise, accurate, and reliable estimations of CTmax.
Estimation of probable maximum typhoon wave for coastal nuclear power plant
丁赟
2011-01-01
The third-generation wave model SWAN (Simulating Waves Nearshore) was employed to estimate the probable maximum typhoon wave in a coastal nuclear power plant engineering area, and the relationship between the development of the probable maximum typhoon wave and that of the accompanying probable maximum storm surge was investigated. It is shown that the probable maximum typhoon wave usually peaks later than the probable maximum storm surge. The estimated probable maximum typhoon wave is higher than the historical observed maximum wave height at Zhelang ocean station. The approach used in this study could provide valuable information for the design of coastal nuclear power engineering.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing an averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
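As context for the averaging problem described above, the classical single-trial delay estimate is the argmax of the cross-correlation with a reference; the paper's joint-ML schemes improve on this by estimating all trial delays jointly rather than correlating against a delay-blurred average. A minimal sketch with a synthetic ERP-like bump (all signal parameters are illustrative):

```python
import numpy as np

def estimate_delay(reference, trial):
    """Delay estimate via the argmax of the full cross-correlation; returns
    the lag (in samples) by which `trial` is shifted relative to `reference`."""
    corr = np.correlate(trial, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
n, true_delay = 256, 7
template = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)  # ERP-like bump
trial = np.roll(template, true_delay) + 0.02 * rng.standard_normal(n)
print(estimate_delay(template, trial))  # -> 7
```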
Luo, Hong; Ma, You-xin; Liu, Wen-jun; Li, Hong-mei
2010-05-01
Using the maximum upstream flow path, a self-developed method for calculating slope length based on the Arc Macro Language (AML), five groups of DEM data for different regions in Bijie Prefecture of Guizhou Province were processed to compute the slope length and topographic factors of the Prefecture. The time cost of calculating the slope length and the values of the topographic factors were analyzed and compared with those of the iterative slope length method based on AML (ISLA) and on C++ (ISLC). The results showed that the new method was feasible for calculating the slope length and topographic factors in the revised universal soil loss equation, and had the same effect as the iterative slope length method. Compared with ISLA, the new method had high computing efficiency, greatly decreasing the time consumption, and could be applied over large areas to estimate the slope length and topographic factors based on AML. Compared with ISLC, the new method had similar computing efficiency, but its code was easier to write, modify, and debug using AML. Therefore, the new method could be more broadly used by GIS users.
Weber, S. L.; Drury, A. J.; Toonen, W. H. J.; van Weele, M.
2010-03-01
It is an open question to what extent wetlands contributed to the interglacial-glacial decrease in atmospheric methane concentration. Here we estimate methane emissions from glacial wetlands, using newly available PMIP2 simulations of the Last Glacial Maximum (LGM) climate from coupled atmosphere-ocean and atmosphere-ocean-vegetation models. These simulations apply improved boundary conditions resulting in better agreement with paleoclimatic data than earlier PMIP1 simulations. Emissions are computed from the dominant controls of water table depth, soil temperature, and plant productivity, and we analyze the relative role of each factor in the glacial decline. It is found that latitudinal changes in soil moisture, in combination with ice sheet expansion, cause boreal wetlands to shift southward in all simulations. This southward migration is instrumental in maintaining the boreal wetland source at a significant level. The mean emission temperature over boreal wetlands drops by only a few degrees, despite the strong overall cooling. The temperature effect on the glacial decline in the methane flux is therefore moderate, while reduced plant productivity contributes equally to the total reduction. Model results indicate a relatively small boreal and large tropical source during the LGM, with wetlands on the exposed continental shelves mainly contributing to the tropical source. This distribution in emissions is consistent with the low interpolar difference in glacial methane concentrations derived from ice core data.
Terrestrial Subsurface Ecosystem
Wilkins, Michael J.; Fredrickson, Jim K.
2015-10-15
The Earth’s crust is a solid, cool layer that overlies the mantle, with a thickness varying between 30–50 km on continental plates and 5–10 km on oceanic plates. Continental crust is composed of a variety of igneous, metamorphic, and sedimentary rocks that weather and re-form over geologic cycles lasting millions to billions of years. At the crust surface, these weathered minerals and organic material combine to produce a variety of soil types that provide suitable habitats and niches for abundant microbial diversity (see Chapter 4). Beneath this soil zone is the subsurface. Though the subsurface was once thought to be relatively free of microorganisms, recent estimates suggest that 10¹⁶–10¹⁷ g C of biomass (2–19% of Earth’s total biomass) may be present in this environment (Whitman et al., 1998; McMahon and Parnell, 2014). Microbial life in the subsurface exists across a wide range of habitats: from pores associated with relatively shallow unconsolidated aquifer sediments to fractures in bedrock formations that are more than a kilometer deep, where extreme lithostatic pressures and temperatures are encountered. While these different environments contain varying physical and chemical conditions, the absence of light is a constant. Despite this, diverse physiologies and metabolisms enable microorganisms to harness energy and carbon for growth in water-filled pore spaces and fractures. Carbon and other element cycles are driven by microbial activity, which has implications for both natural processes and human activities in the subsurface; e.g., bacteria play key roles in both hydrocarbon formation and degradation. Hydrocarbons are a major focus for human utilization of the subsurface, via oil and gas extraction and potential geologic CO₂ sequestration. The subsurface is also utilized or being considered for sequestered storage of high-level radioactive waste from nuclear power generation and residual waste from past production of weapons-grade nuclear materials. While our
Henderson, Donald M; Nicholls, Robert
2015-08-01
Motivated by the palaeo-artwork "Double Death" (2011), a biomechanical analysis using three-dimensional digital models was conducted to assess the potential of a pair of the large Late Cretaceous theropod dinosaur Carcharodontosaurus saharicus to successfully lift a medium-sized sauropod without losing balance. Limaysaurus tessonei from the Late Cretaceous of South America was chosen as the sauropod because it is more completely known, yet closely related to the rebbachisaurid sauropods found in the same deposits as C. saharicus. The body models incorporate the details of the low-density regions associated with lungs, systems of air sacs, and pneumatized axial skeletal regions. These details, along with the surface meshes of the models, were used to estimate the body masses and centers of mass of the two animals. It was found that a 6 t C. saharicus could successfully lift a mass of 2.5 t without losing balance, as the combined center of mass of the body and the load in the jaws would still be over the feet. However, the neck muscles were found to be capable of producing only enough force to hold up the head with an added mass of 424 kg held at the midpoint of the maxillary tooth row. The jaw adductor muscles were more powerful and could have held a load of 512 kg. The more limiting neck constraint leads to the conclusion that two adult C. saharicus could successfully lift a L. tessonei with a maximum body mass of 850 kg and a body length of 8.3 m.
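The balance argument in this record reduces to a mass-weighted center-of-mass computation: the lift succeeds only if the combined COM of body plus load stays over the feet. A one-dimensional sketch (the positions are hypothetical, chosen only to illustrate the 6 t / 2.5 t case; they are not taken from the paper):

```python
def combined_com(m_body, x_body, m_load, x_load):
    """Fore-aft position of the combined center of mass of body and load."""
    return (m_body * x_body + m_load * x_load) / (m_body + m_load)

# Hypothetical geometry: body COM directly over the feet (0.0 m), a 2.5 t
# load held 1.5 m forward of the feet; masses in kg.
com = combined_com(6000.0, 0.0, 2500.0, 1.5)
print(round(com, 2))  # -> 0.44
# Balance holds if this value lies within the fore-aft extent of the feet.
```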
Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.
2017-02-01
Second-harmonic generation (SHG) microscopy is a label-free imaging technique for studying collagenous materials in the extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that particularly degrade the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for this imaging modality. As a result, in comparison with other two-photon imaging systems such as two-photon fluorescence, it is difficult to apply PSF-engineering techniques to bring the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for samples with varying SHG sources, such as gold nanoparticles and collagen in porcine foot tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact the diagnosis and treatment of human diseases.
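The maximum-likelihood deconvolution machinery behind such resolution enhancement can be illustrated with the classic Richardson–Lucy iteration for Poisson imaging (a 1-D sketch with a known Gaussian PSF; the paper's AdvMLE algorithm additionally re-estimates the PSF adaptively, which is not reproduced here):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy iteration: the multiplicative ML update for
    y = psf * x under Poisson statistics. Estimates stay nonnegative and
    total flux is preserved."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

x = np.zeros(64)
x[32] = 1.0                                        # point source
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
psf /= psf.sum()
y = np.convolve(x, psf, mode="same")               # blurred observation
restored = richardson_lucy(y, psf)
# The restored peak is sharper (higher and narrower) than the blurred one.
```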
Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M
2016-04-21
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.
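The statistical core of such iterative reconstruction is the MLEM update for a linear Poisson model, sketched below on a toy two-component system (the matrix and "measurements" are illustrative; ML-ESCA itself adds the full physics model of spectrum, detector response, and molecular interference functions):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM iteration for y ~ Poisson(A @ x): a multiplicative update that
    keeps the component estimates nonnegative and increases the likelihood."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity (column sums)
    for _ in range(n_iter):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

# Toy system: 4 scatter measurements of 2 material components (assumed basis).
A = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0], [0.7, 0.4]])
x_true = np.array([3.0, 5.0])
y = A @ x_true                  # noiseless measurements
x_hat = mlem(A, y)              # recovers x_true to good accuracy
```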
Wang Huai-Chun
2009-09-01
Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.
Xiaokang Kou
2016-01-01
Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs inverted from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0–5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.
Subsurface Facility System Description Document
Eric Loros
2001-07-31
The Subsurface Facility System encompasses the location, arrangement, size, and spacing of the underground openings. This subsurface system includes accesses, alcoves, and drifts. This system provides access to the underground, provides for the emplacement of waste packages, provides openings to allow safe and secure work conditions, and interfaces with the natural barrier. This system includes what is now the Exploratory Studies Facility. The Subsurface Facility System physical location and general arrangement help support the long-term waste isolation objectives of the repository. The Subsurface Facility System locates the repository openings away from main traces of major faults, away from exposure to erosion, above the probable maximum flood elevation, and above the water table. The general arrangement, size, and spacing of the emplacement drifts support disposal of the entire inventory of waste packages based on the emplacement strategy. The Subsurface Facility System provides access ramps to safely facilitate development and emplacement operations. The Subsurface Facility System supports the development and emplacement operations by providing subsurface space for such systems as ventilation, utilities, safety, monitoring, and transportation.
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N₀ → ∞ (regardless of the relative sizes of N₀ and Nᵢ, i = 1, …, m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
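The step-size condition in this record can be seen in the simplest case: for the mean of a unit-variance normal sample, the average log-likelihood gradient is (x̄ − θ), so the steepest-ascent iteration contracts to the MLE exactly when the step size lies in (0, 2). A minimal sketch with toy data:

```python
import numpy as np

def ml_mean_iteration(x, step, n_iter=40):
    """Successive-approximation ML iteration theta <- theta + step * (mean - theta).
    The error shrinks by a factor |1 - step| each pass, so it converges to the
    MLE (the sample mean) if and only if 0 < step < 2."""
    theta = 0.0
    for _ in range(n_iter):
        theta += step * (x.mean() - theta)
    return theta

x = np.array([1.2, 0.7, 2.1, 1.6, 0.9])
print(round(ml_mean_iteration(x, 1.5), 6))  # -> 1.3 (the sample mean)
```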
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Metin I Eren
Full Text Available BACKGROUND: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. METHODOLOGY/PRINCIPAL FINDINGS: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. CONCLUSIONS/SIGNIFICANCE: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g., athletic shoes), or census information (e.g., nationality, religion, age, or race).
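The Chao1 estimator named above is simple enough to sketch, together with the upper bound that motivates the paper (clipping the point estimate at a known maximum is a simplification for illustration; the paper's method bounds the confidence interval as well):

```python
def chao1(counts, max_classes=None):
    """Chao1 asymptotic richness: S_obs + f1^2 / (2 * f2), using the
    bias-corrected form f1 * (f1 - 1) / 2 when there are no doubletons.
    If the total number of possible classes is known, clip the estimate."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)  # singleton classes
    f2 = sum(1 for c in counts if c == 2)  # doubleton classes
    est = s_obs + (f1 * f1 / (2.0 * f2) if f2 > 0 else f1 * (f1 - 1) / 2.0)
    return min(est, float(max_classes)) if max_classes is not None else est

# Ten observed classes, many rare; suppose only 12 class types can exist.
counts = [5, 4, 3, 2, 2, 1, 1, 1, 1, 1]
print(chao1(counts))      # -> 16.25  (exceeds the possible maximum)
print(chao1(counts, 12))  # -> 12.0   (doubly-bounded estimate)
```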
M. N. Mishra
2004-01-01
Full Text Available This paper is concerned with the study of the rate of convergence of the distribution of the maximum likelihood estimator of a parameter appearing linearly in the drift coefficients of two types of stochastic partial differential equations (SPDEs).
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods is the Empirical Best Linear Unbiased Prediction (EBLUP). The maximum likelihood (ML) procedure for EBLUP does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduces the MSE in small area estimation.
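The ML-versus-REML distinction driving this paper shows up already in the fixed-effects linear model: ML divides the residual sum of squares by n, ignoring the p degrees of freedom spent on β̂, while REML divides by n − p. A minimal sketch with toy data:

```python
import numpy as np

def ml_and_reml_variance(X, y):
    """Return the ML (rss / n) and REML (rss / (n - p)) variance estimates
    for the linear model y = X @ beta + e with iid normal errors."""
    n, p = X.shape
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(np.sum((y - X @ beta_hat) ** 2))
    return rss / n, rss / (n - p)

X = np.column_stack([np.ones(6), np.arange(6.0)])   # intercept + trend
y = np.array([1.0, 2.2, 2.9, 4.1, 5.2, 5.8])
sigma2_ml, sigma2_reml = ml_and_reml_variance(X, y)
# REML is the larger of the two; the gap is the degrees-of-freedom correction.
```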
Fast estimation from above of the maximum wave speed in the Riemann problem for the Euler equations
Guermond, Jean-Luc; Popov, Bojan
2016-09-01
This paper is concerned with the construction of a fast algorithm for computing the maximum speed of propagation in the Riemann solution for the Euler system of gas dynamics with the co-volume equation of state. The novelty in the algorithm is that it stops when a guaranteed upper bound for the maximum speed is reached with a prescribed accuracy. The convergence rate of the algorithm is cubic and the bound is guaranteed for gases with the co-volume equation of state and the heat capacity ratio γ in the range (1, 5/3].
Hyuk-Jae Roh Prasanta K. Sahu Ata M. Khan Satish Sharma
2015-01-01
…, where the model estimation is usually carried out by using commercial software. Nonetheless, tailored computer codes offer modellers greater flexibility and control of unique modelling situation…
Haberman, Shelby J.
2004-01-01
The usefulness of joint and conditional maximum likelihood is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al. (2002). For small measurement error variances they are equal up to the order of the measurement error variance, and thus nearly equally efficient.
Maine, R. E.
1978-01-01
There are several practical problems in using current techniques with five degree of freedom equations to estimate the stability and control derivatives of oblique wing aircraft from flight data. A technique was developed to estimate these derivatives by separating the analysis of the longitudinal and lateral directional motion without neglecting cross coupling effects. Although previously applied to symmetrical aircraft, the technique was not expected to be adequate for oblique wing vehicles. The application of the technique to flight data from a remotely piloted oblique wing aircraft is described. The aircraft instrumentation and data processing were reviewed, with particular emphasis on the digital filtering of the data. A complete set of flight determined stability and control derivative estimates is presented and compared with predictions. The results demonstrated that the relatively simple approach developed was adequate to obtain high quality estimates of the aerodynamic derivatives of such aircraft.
Subsurface contaminants focus area
NONE
1996-08-01
The US Department of Energy (DOE) Subsurface Contaminants Focus Area is developing technologies to address environmental problems associated with hazardous and radioactive contaminants in soil and groundwater that exist throughout the DOE complex, including radionuclides, heavy metals, and dense non-aqueous phase liquids (DNAPLs). More than 5,700 known DOE groundwater plumes have contaminated over 600 billion gallons of water and 200 million cubic meters of soil. Migration of these plumes threatens local and regional water sources, and in some cases has already adversely impacted off-site resources. In addition, the Subsurface Contaminants Focus Area is responsible for supplying technologies for the remediation of numerous landfills at DOE facilities. These landfills are estimated to contain over 3 million cubic meters of radioactive and hazardous buried waste. Technology developed within this specialty area will provide effective methods to contain contaminant plumes and new or alternative in situ technologies to minimize waste disposal costs and potential worker exposure by treating plumes in place. While addressing contaminant plumes emanating from DOE landfills, the Subsurface Contaminants Focus Area is also working to develop new or alternative technologies for in situ stabilization and nonintrusive characterization of these disposal sites.
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
West, Anthony C. F.; Novakowski, Kent S.; Gazor, Saeed
2006-06-01
We propose a new method to estimate the transmissivities of bedrock fractures from transmissivities measured in intervals of fixed length along a borehole. We define the scale of a fracture set by the inverse of the density of the Poisson point process assumed to represent their locations along the borehole wall, and we assume a lognormal distribution for their transmissivities. The parameters of the latter distribution are estimated by maximizing the likelihood of a left-censored subset of the data where the degree of censorship depends on the scale of the considered fracture set. We applied the method to sets of interval transmissivities simulated by summing random fracture transmissivities drawn from a specified population. We found the estimated distributions compared well to the transmissivity distributions of similarly scaled subsets of the most transmissive fractures from among the specified population. Estimation accuracy was most sensitive to the variance in the transmissivities of the fracture population. Using the proposed method, we estimated the transmissivities of fractures at increasing scale from hydraulic test data collected at a fixed scale in Smithville, Ontario, Canada. This is an important advancement since the resultant curves of transmissivity parameters versus fracture set scale would only previously have been obtainable from hydraulic tests conducted with increasing test interval length and with degrading equipment precision. Finally, on the basis of the properties of the proposed method, we propose guidelines for the design of fixed interval length hydraulic testing programs that require minimal prior knowledge of the rock.
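The core of the method above — maximum likelihood estimation of lognormal parameters from a left-censored sample — can be sketched briefly. The example below uses synthetic log-transmissivities and SciPy; the parameter values and censoring threshold are illustrative, not taken from the Smithville data. Censored observations contribute their CDF mass to the likelihood instead of a density term:

```python
import numpy as np
from scipy import stats, optimize

# synthetic log-transmissivities; values below a measurement limit are censored
rng = np.random.default_rng(1)
mu_true, sigma_true = -6.0, 1.5          # illustrative lognormal parameters (log10 T)
logT = rng.normal(mu_true, sigma_true, size=200)
limit = -6.5                              # detection/censoring threshold
observed = logT[logT >= limit]
n_cens = int(np.sum(logT < limit))

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(observed, mu, sigma).sum()
    # each censored value contributes P(logT < limit), not a density
    ll += n_cens * stats.norm.logcdf(limit, mu, sigma)
    return -ll

res = optimize.minimize(neg_log_lik,
                        x0=[observed.mean(), np.log(observed.std())],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Parameterizing the optimization in log σ keeps the scale parameter positive without explicit bounds.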
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short…
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.
Sung Woo Park; Byung Kwan Oh; Hyo Seon Park
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this...
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Zhang, Yafei; Zhang, Fangqing; Chen, Guanghua
1994-12-01
It is proposed in this paper that the minimum substrate temperature for diamond growth from hydrogen-hydrocarbon gas mixtures be determined by the packing arrangements of hydrocarbon fragments at the surface, and the maximum substrate temperature be limited by the diamond growth surface reconstruction, which can be prevented by saturating the surface dangling bonds with atomic hydrogen. Theoretical calculations have been done by a formula proposed by Dryburgh [J. Crystal Growth 130 (1993) 305], and the results show that diamond can be deposited at the substrate temperatures ranging from ≈ 400 to ≈ 1200°C by low pressure chemical vapor deposition. This is consistent with experimental observations.
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machines failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing faults characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes.
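A minimal single-dimension MUSIC pseudospectrum illustrates the subspace idea behind the MD MUSIC approach described above (the paper's multi-dimensional variant and its amplitude estimator are not reproduced here). The signal, frequencies and noise level below are synthetic placeholders, not measured stator-current data:

```python
import numpy as np

# synthetic stator-current-like signal: a strong mains line at 50 Hz plus a
# weaker fault-related line at 64 Hz (both frequencies are illustrative)
fs = 1000.0
n = 4000
t = np.arange(n) / fs
rng = np.random.default_rng(2)
x = (np.sin(2 * np.pi * 50.0 * t)
     + 0.5 * np.sin(2 * np.pi * 64.0 * t)
     + 0.1 * rng.normal(size=n))

m = 60                                         # covariance matrix order
snap = np.lib.stride_tricks.sliding_window_view(x, m)
R = snap.T @ snap / snap.shape[0]              # sample covariance matrix

w, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
En = V[:, : m - 4]                             # noise subspace: 2 real lines = 4 complex exponentials

freqs = np.linspace(30.0, 80.0, 2001)
steering = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))
# pseudospectrum peaks where the steering vector is orthogonal to the noise subspace
pseudo = 1.0 / np.sum(np.abs(steering.conj() @ En) ** 2, axis=1)

# pick the two highest, well-separated peaks
order = np.argsort(pseudo)[::-1]
est = [freqs[order[0]]]
for i in order[1:]:
    if abs(freqs[i] - est[0]) > 5.0:
        est.append(freqs[i])
        break
est = sorted(est)
```

The high resolution comes from the eigendecomposition: the two sinusoids occupy a low-dimensional signal subspace, and frequencies are located where steering vectors are nearly orthogonal to the complementary noise subspace.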
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2016-11-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (km) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
Jorge Cuadrado Reyes
2011-05-01
This research developed an algorithm for calculating the maximum heart rate (max. HR) for players in team sports in game situations. The sample consisted of thirteen players (aged 24 ± 3) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to derive a max. HR prediction equation from the max. HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max. HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
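The regression step described above can be sketched in a few lines. The per-session values below are hypothetical, and extrapolating the fitted line to a maximal perceived effort (RPE 10) is a simplified stand-in for the paper's equation built from the three highest-intensity sessions:

```python
import numpy as np

# hypothetical per-session data for one player: RPE and the highest HR reached
rpe = np.array([5.0, 6.0, 6.0, 7.0, 8.0, 8.0, 9.0])
hr = np.array([168.0, 172.0, 175.0, 180.0, 186.0, 188.0, 192.0])

# least-squares fit of the line hr = a * rpe + b
A = np.vstack([rpe, np.ones_like(rpe)]).T
(a, b), *_ = np.linalg.lstsq(A, hr, rcond=None)

# extrapolate to maximal perceived effort (RPE 10) as the predicted game max. HR
hr_pred = a * 10.0 + b
```

In practice the fit would be validated against an external criterion, as the study did with the Course Navette test.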
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring, and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/.
Simons, Frederik J
2012-01-01
Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown the elastic flexural rigidity of the lithosphere. In the guise of an equivalent "effective elastic thickness", this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity, have revealed themselves to have insufficient power to in...
De Kauwe, Martin G; Lin, Yan-Shih; Wright, Ian J; Medlyn, Belinda E; Crous, Kristine Y; Ellsworth, David S; Maire, Vincent; Prentice, I Colin; Atkin, Owen K; Rogers, Alistair; Niinemets, Ülo; Serbin, Shawn P; Meir, Patrick; Uddling, Johan; Togashi, Henrique F; Tarvainen, Lasse; Weerasinghe, Lasantha K; Evans, Bradley J; Ishida, F Yoko; Domingues, Tomas F
2016-05-01
Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A-Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits the availability of Vcmax data. However, many multispecies field datasets include measurements of net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat), from which Vcmax can be extracted using a 'one-point method'. We used a global dataset of A-Ci curves (564 species from 46 field sites, covering a range of plant functional types) to test the validity of an alternative approach to estimating Vcmax from Asat via this 'one-point method'. If leaf respiration during the day (Rday) is known exactly, Vcmax can be estimated with an r² value of 0.98 and a root-mean-squared error (RMSE) of 8.19 μmol m⁻² s⁻¹. However, Rday typically must be estimated. Estimating Rday as 1.5% of Vcmax, we found that Vcmax could be estimated with an r² of 0.95 and an RMSE of 17.1 μmol m⁻² s⁻¹. The one-point method provides a robust means to expand current databases of field-measured Vcmax, giving new potential to improve vegetation models and quantify the environmental drivers of Vcmax variation.
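The algebra behind the Rday = 1.5% assumption can be made explicit: substituting Rday = 0.015 Vcmax into the Rubisco-limited photosynthesis equation makes Vcmax directly solvable from a single (Asat, Ci) pair. In the sketch below, the kinetic constants gamma_star and Km are illustrative 25 °C values and the (Asat, Ci) inputs are hypothetical:

```python
def vcmax_one_point(asat, ci, gamma_star=42.75, km=710.0):
    """One-point Vcmax estimate with Rday assumed to be 1.5% of Vcmax.

    Solves Asat = Vcmax * (Ci - gamma_star) / (Ci + Km) - 0.015 * Vcmax
    for Vcmax. gamma_star and Km are illustrative 25 degC values
    (umol mol-1), not constants prescribed by the study.
    """
    return asat / ((ci - gamma_star) / (ci + km) - 0.015)

# hypothetical saturating-light measurement: Asat in umol m-2 s-1, Ci in umol mol-1
v = vcmax_one_point(asat=20.0, ci=275.0)
```

Because the equation is linear in Vcmax once Rday is expressed as a fixed fraction of it, no iteration is needed.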
Lopez, D.; Matsubara, Y.; Muraki, Y.; Sako, T.; Valdés-Galicia, J. F.
2016-03-01
We searched for solar neutrons using the data collected by six detectors of the International Network of Solar Neutron Telescopes and one neutron monitor between January 2010 and December 2014. We considered the peak time of the X-ray intensity of thirty-five ≥ X1.0 class flares detected by the GOES satellite as the most probable production time of solar neutrons. We prepared a light curve of the solar neutron telescopes and the neutron monitor for each flare, spanning ± 3 h from the GOES peak time. Based on these light curves, we performed a statistical analysis for each flare. Setting a significance level of greater than 3σ, we report that no statistically significant signals due to solar neutrons were found. Therefore, upper limits are determined by the background level and solar angle of these thirty-five solar flares. Our calculation assumed a power-law neutron energy spectrum and an impulsive emission profile at the Sun. The estimated upper limits of the neutron emission are consistent within an order of magnitude with the successful detections of solar neutrons made in solar cycle 23.
Sher Khan Panhwar; LIU Qun; Fozia Khan; Pirzada J. A. Siddiqui
2012-01-01
Using the surplus production model packages ASPIC (a stock-production model incorporating covariates) and CEDA (catch-effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters MSY (maximum sustainable yield), Fmsy (fishing mortality), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (maximum sustainable yield over initial biomass). The estimated non-bootstrapped value of MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which shows that the Fox model estimate is more conservative than the logistic one. The R² with the logistic model (0.702) is larger than that with the Fox model (0.541), which indicates a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for a larger value of 88.87 and a smaller value of 0.173. In contrast to the ASPIC results, the R² with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are MSY, K, q, and r (intrinsic growth), and the three error assumptions used in the models are normal, log-normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar. The MSY estimates from the above two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the above three error assumptions, respectively. The Fox model estimates were smaller than those from the Schaefer and Pella-Tomlinson models. In light of the MSY estimates of 415 t from ASPIC and 381 t from CEDA, both for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we would suggest the fishery be kept at the current level. The production models used here depend on the assumption that the CPUE (catch per unit effort) data used in the study can reliably quantify
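The closed-form MSY expressions of the two surplus production models contrasted above can be sketched directly. The (r, K) values below are illustrative placeholders, not the fitted Sillago sihama parameters; in practice each model is fitted to the catch-effort series separately, so the ranking of the resulting MSYs depends on the fitted parameters, not just these formulas:

```python
import math

def schaefer_msy(r, k):
    # logistic (Schaefer) surplus production r*B*(1 - B/K) peaks at B = K/2
    return r * k / 4.0

def fox_msy(r, k):
    # Fox surplus production r*B*ln(K/B) peaks at B = K/e
    return r * k / math.e

# illustrative (not fitted) intrinsic growth rate and carrying capacity in tonnes
r, k = 0.6, 2500.0
msy_schaefer = schaefer_msy(r, k)
msy_fox = fox_msy(r, k)
```

Software such as ASPIC and CEDA wraps these production curves in an observation model (normal, log-normal or gamma errors on CPUE) and estimates (r, K, q) jointly.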
de Oliveira, Liliam Fernandes; Menegaldo, Luciano Luporini
2010-10-19
EMG-driven models can be used to estimate muscle force in biomechanical systems. Collected and processed EMG readings are used as the input of a dynamic system, which is integrated numerically. This approach requires the definition of a reasonably large set of parameters. Some of these vary widely among subjects, and slight inaccuracies in such parameters can lead to large model output errors. One of these parameters is the maximum voluntary contraction force (F(om)). This paper proposes an approach to find F(om) by estimating muscle physiological cross-sectional area (PCSA) using ultrasound (US), which is multiplied by a realistic value of maximum muscle specific tension. Ultrasound is used to measure muscle thickness, which allows for the determination of muscle volume through regression equations. Soleus, gastrocnemius medialis and gastrocnemius lateralis PCSAs are estimated using published volume proportions among leg muscles, which also requires measurements of muscle fiber length and pennation angle by US. F(om) obtained by this approach and from data widely cited in the literature was used to comparatively test a Hill-type EMG-driven model of the ankle joint. The model uses 3 EMGs (Soleus, gastrocnemius medialis and gastrocnemius lateralis) as inputs with joint torque as the output. The EMG signals were obtained in a series of experiments carried out with 8 adult male subjects, who performed an isometric contraction protocol consisting of 10s step contractions at 20% and 60% of the maximum voluntary contraction level. Isometric torque was simultaneously collected using a dynamometer. A statistically significant reduction in the root mean square error was observed when US-obtained F(om) was used, as compared to F(om) from the literature.
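The F(om) estimate described above reduces to two small formulas: PCSA from ultrasound-derived muscle volume, fiber length and pennation angle, then force as PCSA times specific tension. The input values and the 30 N/cm² specific tension below are illustrative assumptions, not the study's measurements:

```python
import math

def pcsa_cm2(volume_cm3, fiber_len_cm, pennation_deg):
    # physiological cross-sectional area from US-derived muscle volume,
    # fiber length and pennation angle
    return volume_cm3 * math.cos(math.radians(pennation_deg)) / fiber_len_cm

def f_om(pcsa, specific_tension_n_per_cm2=30.0):
    # maximum voluntary contraction force: PCSA times maximum specific tension
    # (the specific tension value here is an illustrative assumption)
    return pcsa * specific_tension_n_per_cm2

# hypothetical soleus-like inputs
a = pcsa_cm2(volume_cm3=450.0, fiber_len_cm=4.0, pennation_deg=25.0)
force = f_om(a)
```

Subject-specific PCSA is what replaces the literature F(om) value in the Hill-type model, which is the substitution the study found to reduce torque prediction error.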
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless networks such as OFDM, CDMA2000 and WCDMA provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing. A solution for area reduction in the MIMO Maximum Likelihood Receiver (MLE) using sorted QR decomposition (SQRD) and the unitary transformation method is analyzed. It provides a unified approach, reduces ISI and gives better performance at low cost. The receiver pre-processor architecture based on Minimum Mean Square Error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited for fixed-point arithmetic.
Subsurface Biogeochemistry of Actinides
Kersting, Annie B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Univ. Relations and Science Education; Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Glenn T. Seaborg Inst.
2016-06-29
A major scientific challenge in environmental sciences is to identify the dominant processes controlling actinide transport in the environment. It is estimated that currently over 2200 metric tons of plutonium (Pu) have been deposited in the subsurface worldwide, a number that increases yearly with additional spent nuclear fuel (Ewing et al., 2010). Plutonium has been shown to migrate on the scale of kilometers, giving way to a critical concern that the fundamental biogeochemical processes that control its behavior in the subsurface are not well understood (Kersting et al., 1999; Novikov et al., 2006; Santschi et al., 2002). Neptunium (Np) is less prevalent in the environment; however, it is predicted to be a significant long-term dose contributor in high-level nuclear waste. Our focus on Np chemistry in this Science Plan is intended to help formulate a better understanding of Pu redox transformations in the environment and clarify the differences between the two long-lived actinides. The research approach of our Science Plan combines (1) Fundamental Mechanistic Studies that identify and quantify biogeochemical processes that control actinide behavior in solution and on solids, (2) Field Integration Studies that investigate the transport characteristics of Pu and test our conceptual understanding of actinide transport, and (3) Actinide Research Capabilities that allow us to achieve the objectives of this Scientific Focus Area (SFA) and provide new opportunities for advancing actinide environmental chemistry. These three Research Thrusts form the basis of our SFA Science Program (Figure 1).
NONE
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
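The machinery described above — drawing samples directly from a posterior that combines a geologically justified prior with the discovery-sequence likelihood — can be illustrated with a toy random-walk Metropolis sampler. The data, prior and single-parameter model below are hypothetical simplifications, not Smith's discovery process model:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical log field sizes from a short discovery sequence
data = rng.normal(loc=2.0, scale=1.0, size=15)

def log_post(mu):
    # N(1.5, 1) "geological" prior on the mean, plus a unit-variance
    # normal likelihood for the observed log sizes
    prior = -0.5 * (mu - 1.5) ** 2
    lik = -0.5 * np.sum((data - mu) ** 2)
    return prior + lik

# random-walk Metropolis: sample directly from the posterior
mu, lp, samples = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = mu + rng.normal(scale=0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    samples.append(mu)
post_mean = float(np.mean(samples[5000:]))   # discard burn-in
```

Because this toy model is conjugate, the sampler can be checked against the analytic posterior mean; in the discovery process setting the posterior has no closed form, which is exactly why MCMC is used.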
Arbones, B.; Figueiras, F. G.; Varela, R.
2000-09-01
Spectral and non-spectral measurements of the maximum quantum yield of carbon fixation for natural phytoplankton assemblages were compared in order to evaluate their effect on bio-optical models of primary production. Field samples were collected from two different coastal regions of NW Spain in spring, summer and autumn, and in a polar environment (Gerlache Strait, Antarctica) during the austral summer. Concurrent determinations were made of the spectral phytoplankton absorption coefficient [aph(λ)], the white-light-limited slope of the photosynthesis-irradiance relationship (αB), carbon uptake action spectra [αB(λ)], broad-band maximum quantum yields (φm), and spectral maximum quantum yields [φm(λ)]. Carbon uptake action spectra roughly followed the shape of the corresponding phytoplankton absorption spectra but with a slight displacement in the blue-green region that could be attributed to imbalance between the two photosystems PS I and PS II. Results also confirmed previous observations of the wavelength dependency of maximum quantum yield. The broad-band maximum quantum yield (φm) calculated from the measured spectral phytoplankton absorption coefficient and the spectrum of the incubator light source was not significantly different from the averaged spectral maximum quantum yield [φ̄max(λ)] (t-test for paired samples, P=0.34). These results suggest that maximum quantum yield can be estimated with sufficient accuracy from white-light P-E curves and measured phytoplankton absorption spectra. Primary production at light-limiting regimes was compared using four different models with varying degrees of spectral complexity. No significant differences (t-test for paired samples, P=0.91) were found between a spectral model based on the carbon uptake action spectra [αB(λ), model a] and a model which uses the broad-band φm and measured aph(λ) (model b). In addition, primary production derived from constructed action spectra [ ac
Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard
1990-01-01
The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB-produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.
Borg, Søren; Persson, U.; Jess, T.;
2010-01-01
... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission......, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission...... data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery...
Ustaszewski, Kamil; Kasch, Norbert; Siegburg, Melanie; Navabpour, Payman; Thieme, Manuel
2014-05-01
The southwestern part of Thuringia (central Germany) hosts large subsurface extents of Lower Carboniferous granitoids of the Mid-German Crystalline Rise, overlain by an up to several kilometer thick succession of Lower Permian to Mid-Triassic volcanic and sedimentary rocks. The granitic basement represents a conductivity-controlled ('hot dry rock') reservoir of high potential that could be targeted for economic exploitation as an enhanced geothermal system (EGS) in the future. As a preparatory measure, the federal states of Thuringia and Saxony have jointly funded a collaborative research and development project ('Optiriss') aimed at mitigating non-productivity risks during the exploration of such reservoirs. In order to provide structural constraints on the fracture network design during reservoir stimulation, we have carried out a geometric and kinematic analysis of pre-existing fracture patterns in exposures of the Carboniferous basement and Mesozoic cover rocks within an area of c. 500 km2 around the towns of Meiningen and Suhl, where granitic basement and sedimentary cover are juxtaposed along the southern border fault of the Thuringian Forest basement high. The frequency distribution of fractures was assessed by combining outcrop-scale fracture measurements in 31 exposures and photogrammetric analysis of fractures using a LIDAR DEM with 5 m horizontal resolution and rectified aerial images at 4 localities. This analysis revealed a prevalence of NW-SE-trending fractures of mainly joints, extension veins, Permian magmatic dikes and subordinately brittle faults in the Carboniferous granitic basement, which probably resulted from Permian tectonics. In order to assess the reactivation potential of fractures in the reservoir during a stimulation phase, constraints on the current strain regime and in-situ stress magnitudes, including borehole data and earthquake focal mechanisms in a larger area, were needed. These data reveal a presently NW-SE-trending maximum
A Maximum Likelihood Method for Harmonic Impedance Estimation
华回春; 贾秀芳; 曹东升; 赵成勇
2014-01-01
In order to estimate the harmonic impedance more accurately, a complex-domain maximum likelihood estimation method is proposed in this paper. Firstly, the complex multivariate Gaussian random variable is defined by analogy with the definition of the real multivariate Gaussian random variable; following the meaning of covariance, a calculation formula for the complex covariance is given. Secondly, the probability density function of the complex Gaussian distribution is deduced via algebra isomorphism theory. Data selection is performed using statistical theory, and the complex maximum likelihood estimation function is then established for the selected data. Finally, the harmonic impedance is estimated by maximizing the complex maximum likelihood estimation function. A case study based on the IEEE 14-bus test system was conducted, which shows that the proposed method gives more accurate results than traditional methods.
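To make the reduction concrete: under a circularly symmetric complex Gaussian noise model, the ML estimate of a single impedance from measurements V = Z·I + noise collapses to complex least squares. This is a toy sketch with synthetic numbers, not the paper's multivariate estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: harmonic current I and voltage V = Z*I + noise, with
# circularly symmetric complex Gaussian noise. Under that model the maximum
# likelihood estimate of the impedance Z reduces to complex least squares.
Z_true = 2.0 + 1.5j
I = rng.normal(size=200) + 1j * rng.normal(size=200)
noise = 0.1 * (rng.normal(size=200) + 1j * rng.normal(size=200))
V = Z_true * I + noise

# ML / least-squares estimate: Z = <I, V> / <I, I>  (vdot conjugates its
# first argument, so this is sum(conj(I)*V) / sum(|I|^2)).
Z_hat = np.vdot(I, V) / np.vdot(I, I)
print(Z_hat)
```

With 200 samples the estimate lands very close to the true impedance; the paper's contribution is the multivariate complex-Gaussian machinery and data selection around this core idea.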
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality if an appropriate splitting procedure is used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
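The alignment-splitting idea can be sketched as below. This is an illustrative stand-in (a hypothetical `split_alignment` helper), not the FastMG code, and it ignores FastMG's actual criteria for forming sub-alignments:

```python
# Minimal sketch (not the authors' FastMG implementation) of the
# alignment-splitting idea: divide a large alignment into non-overlapping
# sub-alignments so that the expensive tree-building / rate-estimation step
# can run on each smaller piece independently.
def split_alignment(alignment, max_seqs):
    """Split a list of sequences into chunks of at most max_seqs sequences."""
    return [alignment[i:i + max_seqs] for i in range(0, len(alignment), max_seqs)]

aln = [f"seq{i}" for i in range(10)]
parts = split_alignment(aln, 4)
print([len(p) for p in parts])  # [4, 4, 2]
```

The speedup follows because tree inference cost grows much faster than linearly in the number of sequences, so several small trees are far cheaper than one large one.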
Livingston, Richard A.; Jin, Shuang
2005-05-01
Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
Blackwood, R.L.
1980-05-15
There are now available sufficient data from in-situ, pre-mining stress measurements to allow a first attempt at predicting the maximum stress magnitudes likely to occur in a given mining context. The sub-horizontal (lateral) stress generally dominates the stress field, becoming critical to stope stability in many cases. For cut-and-fill mining in particular, where developed fill pressures are influenced by lateral displacement of pillars or stope backs, extraction maximization planning by mathematical modelling techniques demands the best available estimate of pre-mining stresses. While field measurements are still essential for this purpose, in the present paper it is suggested that the worst stress case can be predicted for preliminary design or feasibility study purposes. On the European continent the vertical component of pre-mining stress may be estimated by adding 2 MPa to the pressure due to overburden weight. The maximum lateral stress likely to be encountered is about 57 MPa at depths of some 800 m to 1000 m below the surface.
Joint maximum likelihood and Bayesian channel estimation
沈壁川; 郑建宏; 申敏
2008-01-01
Statistical Bayesian channel estimation is effective in suppressing the noise floor at high SNR, but its performance degrades at low SNR due to less reliable noise estimation. Based on a robust nonlinear de-noising technique for small signals, a simplified joint maximum likelihood and Bayesian channel estimation is proposed and investigated. Simulation results are presented, and analysis shows that the method is promising for improving channel estimation and joint detection performance in both low and high SNR situations.
Phan Thanh Noi
2016-12-01
This study aims to quantitatively evaluate the land surface temperature (LST) derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous studies of Vietnam have integrated both TERRA and AQUA daytime and nighttime LST for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are most effective in describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared, and principal component analysis (PCA) of 14 variables (including the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day, and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both daytime and nighttime LST of TERRA and AQUA with data from the topography analysis.
Weighted Centroid Localization Algorithm Based on Maximum Likelihood Estimation
卢先领; 夏文瑞
2016-01-01
In solving the problem of localizing nodes in a wireless sensor network, we propose a weighted centroid localization algorithm based on maximum likelihood estimation, with the specific goal of addressing the large received signal strength indication (RSSI) ranging error and the low accuracy of the centroid localization algorithm. Firstly, the maximum likelihood estimate between the estimated distance and the actual distance is calculated and used as a weight. Then, a parameter k is introduced into the weight model to optimize the weights between the anchor nodes and the unknown nodes. Finally, the locations of the unknown nodes are calculated and refined using the proposed algorithm. The simulation results show that the weighted centroid algorithm based on maximum likelihood estimation offers high localization accuracy at low cost, and outperforms both the inverse-distance-weighted and the inverse-RSSI-weighted centroid algorithms. Hence, the proposed algorithm is well suited for indoor localization over large areas.
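The core weighted-centroid step can be sketched as follows. The `1/d**k` weight here is a common stand-in for the paper's likelihood-derived weights, and all positions are invented:

```python
import numpy as np

# Minimal sketch of a weighted centroid localizer: the unknown node is placed
# at the weighted average of anchor positions, with weights decreasing in the
# estimated distance (here 1/d**k with a tunable exponent k, standing in for
# the likelihood-derived weights described in the abstract).
def weighted_centroid(anchors, distances, k=1.0):
    anchors = np.asarray(anchors, dtype=float)
    w = 1.0 / np.asarray(distances, dtype=float) ** k
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - np.array(a))) for a in anchors]
est = weighted_centroid(anchors, dists, k=2.0)
print(est)
```

Note the classic centroid bias: the estimate is pulled toward the middle of the anchor polygon, which is exactly what weight tuning (the parameter k) tries to mitigate.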
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years. © 2016 Wiley Periodicals, Inc.
Mendoza, G.; Flores, R. M.; Vega, E., E-mail: gozalo.mendoza@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2016-09-15
For programs and activities to manage aging effects, any changes to plant operations, inspections, maintenance activities, systems and administrative control procedures during the renewal period that could impact the environment should be characterized and designed to manage the effects of aging as required by 10 CFR Part 54. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed; it is then used to identify and select alternatives for severe accident mitigation, which are analyzed to estimate the maximum benefit each alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify SAMA candidates that have the potential to reduce severe accident risk and to determine whether the implementation of each candidate is cost-effective. (Author)
Subsurface chlorophyll maxima in the north-western Bay of Bengal
Sarma, V.V.; Aswanikumar, V.
The depth profiles of phytoplankton pigments in the north-western Bay of Bengal are generally characterized by a subsurface chlorophyll maximum. The occurrence of subsurface chlorophyll maxima is discussed in relation to other information on water...
Maximum Likelihood DOA Estimator Based on Grid Hill Climbing Method
艾名舜; 马红光
2011-01-01
The maximum likelihood estimator for direction of arrival (DOA) possesses optimum theoretical performance but also high computational complexity. Treating the estimation as an optimization problem over a high-dimensional nonlinear function, a novel algorithm is proposed to reduce the computational load. First, the beamforming method is adopted to estimate the spatial spectrum roughly, and a group of initial solutions obeying this "pre-estimated distribution" is constructed from the spatial spectrum information; with high probability, these initial solutions lie in the local attraction basin of the global optimum. Then, the solution in this group with the maximum fitness is selected as the starting point of a local search. The grid hill climbing method (GHCM) is a local search method that takes a grid as its search unit; it is an improved version of the traditional hill climbing method and is more efficient and stable, so it is adopted to obtain the global optimum solution. The proposed algorithm obtains accurate DOA estimates at lower computational cost, and simulations show that it is more efficient than the PSO-based maximum likelihood DOA estimator.
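The coarse-scan-then-refine control flow can be sketched on a toy 1D objective. The quadratic `objective` below is a hypothetical stand-in for the maximum likelihood DOA cost surface, and the step-halving grid search is a simple variant of grid hill climbing:

```python
import numpy as np

# Minimal sketch of the two-stage idea: a coarse scan of the objective picks
# a starting point (standing in for the beamforming pre-estimate), then grid
# hill climbing refines it by moving to the best grid neighbor and shrinking
# the grid spacing when no neighbor improves.
def objective(theta):
    return -(theta - 40.0) ** 2  # toy surrogate with its peak at 40 degrees

def grid_hill_climb(f, start, step, tol=1e-3):
    x = start
    while step > tol:
        best = max([x - step, x, x + step], key=f)
        if best == x:
            step /= 2.0      # no neighbor improves: refine the grid
        else:
            x = best
    return x

coarse = np.arange(0.0, 180.0, 10.0)   # coarse beamforming-style scan
start = max(coarse, key=objective)
theta_hat = grid_hill_climb(objective, start, step=5.0)
print(round(theta_hat, 2))  # → 40.0
```

The real estimator evaluates a likelihood over multiple DOAs jointly, but the search skeleton (coarse global scan, local grid refinement) is the same.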
Koyama, Shinsuke; Paninski, Liam
2010-08-01
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the loglikelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
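To illustrate why the banded Hessian matters: for a scalar Gaussian random-walk state with Gaussian observations (a linear special case, not the integrate-and-fire model of the article), the MAP path solves a single symmetric tridiagonal system, which the Thomas algorithm handles in O(T) rather than the O(T^3) of a dense solve. A sketch with synthetic data:

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system in O(n) (Thomas algorithm)."""
    n = len(diag)
    c, d = np.empty(n), np.empty(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Synthetic random-walk state x and noisy observations y.
T, q, r = 200, 0.1, 0.5            # length, state noise var, obs noise var
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
y = x + rng.normal(scale=np.sqrt(r), size=T)

# The precision matrix (negative Hessian of the log posterior) is
# tridiagonal: 1/r on the diagonal from the observations, plus the
# random-walk prior terms; the MAP path solves J x_map = y / r.
diag = 1.0 / r + 2.0 / q * np.ones(T)
diag[0] -= 1.0 / q
diag[-1] -= 1.0 / q
off = -np.ones(T - 1) / q
x_map = thomas_solve(off, diag, off, y / r)
```

In the article's nonlinear setting each Newton iteration solves a system of exactly this (block-)tridiagonal shape, which is what makes the exact MAP path cheap to compute.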
Wied Pedersen, Jonas; Lund, Nadia Schou Vorndran; Borup, Morten;
2016-01-01
High quality on-line flow forecasts are useful for real-time operation of urban drainage systems and wastewater treatment plants. This requires computationally efficient models, which are continuously updated with observed data to provide good initial conditions for the forecasts. This paper presents a way of updating conceptual rainfall-runoff models using Maximum a Posteriori estimation to determine the most likely parameter constellation at the current point in time. This is done by combining information from prior parameter distributions and the model goodness of fit over a predefined period of time that precedes the forecast. The method is illustrated for an urban catchment, where flow forecasts of 0-4 h are generated by applying a lumped linear reservoir model with three cascading reservoirs. Radar rainfall observations are used as input to the model. The effects of different prior...
Leclercq, C; Arcella, D; Turrini, A
2000-12-01
The three recent EU directives which fixed maximum permitted levels (MPL) of food additives for all member states also include the general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) has been established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of the current ADI being exceeded was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; overall, they contributed 74% of the TMDI. Actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance in the technological process and the percentage of ingestion in the case of chewing gums.
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology, and to use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models proved better at describing denitrification kinetics than assuming a fixed K(NO3) at low nitrate concentrations. The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data were analyzed from 56 denitrification rate tests, and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentiles, respectively) with an average of 0.47 mgN/L. In contrast to this, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
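The two ingredients of the empirical approach, a Monod rate with half-saturation coefficient K and a linear model K = a + b·v_max fitted across rate tests, can be sketched with synthetic numbers (the coefficients and units below are invented, not the study's data):

```python
import numpy as np

# Monod kinetics: rate = v_max * S / (K + S), where K is the
# half-saturation coefficient and S the substrate concentration.
def monod(S, v_max, K):
    return v_max * S / (K + S)

# Synthetic "rate tests": each test yields a maximum rate v_max and an
# extant half-saturation coefficient K generated from a hypothetical
# linear relation K = 0.05 + 0.08 * v_max plus noise.
rng = np.random.default_rng(2)
v_max = rng.uniform(2.0, 10.0, size=50)
K = 0.05 + 0.08 * v_max + rng.normal(scale=0.02, size=50)

# Ordinary least squares fit of the empirical linear model K = a + b*v_max.
A = np.column_stack([np.ones_like(v_max), v_max])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, K, rcond=None)
print(a_hat, b_hat)
```

In the study's interpretation the intercept plays the role of the intrinsic K, while the rate-dependent term reflects diffusion limitations that grow with the maximum rate.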
Li, Ruochen; Englehardt, James D; Li, Xiaoguang
2012-02-01
Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
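The two-stage control flow (find the posterior mode first, then start MCMC at the mode in place of a long burn-in) can be sketched on a toy 1D posterior. The gradient ascent below is a simple stand-in for the paper's GMCMC stage, and the model (Gaussian likelihood, flat prior) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=4.0, scale=1.0, size=100)

def log_post(mu):                  # flat prior + unit-variance Gaussian likelihood
    return -0.5 * np.sum((data - mu) ** 2)

def grad(mu):                      # gradient of the log posterior
    return np.sum(data - mu)

# Stage 1: gradient ascent to the posterior mode (here the sample mean).
mu = 0.0
for _ in range(200):
    mu += 1e-3 * grad(mu)

# Stage 2: random-walk Metropolis initialized at the mode, so the chain
# starts in the high-probability region and needs no burn-in period.
samples, cur, cur_lp = [], mu, log_post(mu)
for _ in range(2000):
    prop = cur + rng.normal(scale=0.2)
    lp = log_post(prop)
    if np.log(rng.uniform()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    samples.append(cur)
print(mu, np.mean(samples))
```

The article's setting is the same scheme applied to highly parameterized multivariate dose-response distributions, where skipping burn-in yields the reported reductions in required data and computation.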
Schnitzer, Mireille E; Moodie, Erica E M; van der Laan, Mark J; Platt, Robert W; Klein, Marina B
2014-03-01
Despite modern effective HIV treatment, hepatitis C virus (HCV) co-infection is associated with a high risk of progression to end-stage liver disease (ESLD) which has emerged as the primary cause of death in this population. Clinical interest lies in determining the impact of clearance of HCV on risk for ESLD. In this case study, we examine whether HCV clearance affects risk of ESLD using data from the multicenter Canadian Co-infection Cohort Study. Complications in this survival analysis arise from the time-dependent nature of the data, the presence of baseline confounders, loss to follow-up, and confounders that change over time, all of which can obscure the causal effect of interest. Additional challenges included non-censoring variable missingness and event sparsity. In order to efficiently estimate the ESLD-free survival probabilities under a specific history of HCV clearance, we demonstrate the double-robust and semiparametric efficient method of Targeted Maximum Likelihood Estimation (TMLE). Marginal structural models (MSM) can be used to model the effect of viral clearance (expressed as a hazard ratio) on ESLD-free survival and we demonstrate a way to estimate the parameters of a logistic model for the hazard function with TMLE. We show the theoretical derivation of the efficient influence curves for the parameters of two different MSMs and how they can be used to produce variance approximations for parameter estimates. Finally, the data analysis evaluating the impact of HCV on ESLD was undertaken using multiple imputations to account for the non-monotone missing data.
Su, Yu-min; Makinia, Jacek; Pagilla, Krishna R
2008-04-01
The autotrophic maximum specific growth rate constant, muA,max, is the critical parameter for design and performance of nitrifying activated sludge systems. In literature reviews (e.g., Henze et al., 1987; Metcalf and Eddy, 1991), a wide range of muA,max values has been reported (0.25 to 3.0 days(-1)); however, recent data from several wastewater treatment plants across North America revealed that the estimated muA,max values remained in the narrow range of 0.85 to 1.05 days(-1). In this study, long-term operation of a laboratory-scale sequencing batch reactor system was investigated for estimating this coefficient according to the low food-to-microorganism ratio bioassay and simulation methods recommended in the Water Environment Research Foundation (Alexandria, Virginia) report (Melcer et al., 2003). The estimated muA,max values using steady-state model calculations for four operating periods ranged from 0.83 to 0.99 day(-1). The International Water Association (London, United Kingdom) Activated Sludge Model No. 1 (ASM1) dynamic model simulations revealed that a single value of muA,max (1.2 days(-1)) could be used, despite variations in the measured specific nitrification rates. However, the average muA,max gradually decreased during the activated sludge chlorination tests, until it reached the value of 0.48 day(-1) at a dose of 5 mg chlorine/(g mixed liquor suspended solids x d). Significant discrepancies between the predicted XA/YA ratios were observed. In some cases, the ASM1 predictions were approximately two times higher than the steady-state model predictions. This implies that estimating this ratio from a complex activated sludge model and using it in simple steady-state model calculations should be accepted with great caution and requires further investigation.
Curtis, Tyler E; Roeder, Ryan K
2017-07-06
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
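A minimal image-domain sketch of the supervised decomposition step, assuming a hypothetical calibrated basis matrix M (rows = energy bins, columns = gadolinium/calcium/water) and using plain least squares in place of the paper's maximum a posteriori estimator; all attenuation numbers below are invented:

```python
import numpy as np

# Hypothetical calibrated material basis matrix: per-material attenuation in
# each of 5 energy bins (columns: gadolinium, calcium, water). The jump in
# the gadolinium column mimics its k-edge; values are invented.
M = np.array([
    [0.50, 0.80, 0.30],
    [0.40, 0.60, 0.28],
    [0.90, 0.45, 0.26],   # bin straddling the Gd k-edge
    [0.75, 0.35, 0.24],
    [0.60, 0.28, 0.22],
])

# One voxel: measured attenuation per bin is a linear mix of the basis
# columns plus detector noise.
rng = np.random.default_rng(4)
true_frac = np.array([0.2, 0.3, 0.5])
measured = M @ true_frac + rng.normal(scale=0.0005, size=5)

# Decomposition: invert the linear model per voxel by least squares
# (a simple stand-in for the paper's maximum a posteriori estimator).
est, *_ = np.linalg.lstsq(M, measured, rcond=None)
print(np.round(est, 3))
```

The calibration question the study investigates maps onto how M itself is obtained: fitting its gadolinium column from phantoms spanning a wider concentration range conditions this inversion better than adding more points in a narrow range.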
Karlsson, J S; Ostlund, N; Larsson, B; Gerdle, B
2003-10-01
Frequency analysis of myoelectric (ME) signals, using the mean power spectral frequency (MNF), has been widely used to characterize peripheral muscle fatigue during isometric contractions assuming constant force. However, during repetitive isokinetic contractions performed with maximum effort, output (force or torque) will decrease markedly during the initial 40-60 contractions, followed by a phase with little or no change. MNF shows a similar pattern. In situations where there exists a significant relationship between MNF and output, part of the decrease in MNF may per se be related to the decrease in force during dynamic contractions. This study estimated force effects on the MNF shifts during repetitive dynamic knee extensions. Twenty healthy volunteers participated in the study, and both surface ME signals (from the right vastus lateralis, vastus medialis, and rectus femoris muscles) and the biomechanical signals (force, position, and velocity) of an isokinetic dynamometer were measured. Two tests were performed: (i) 100 repetitive maximum isokinetic contractions of the right knee extensors, and (ii) five gradually increasing static knee extensions before and after (i). The corresponding ME signal time-frequency representations were calculated using the continuous wavelet transform. Compensation of the MNF variables of the repetitive contractions was performed with respect to the individual MNF-force relation based on an average of five gradually increasing contractions. Whether or not compensation was necessary was based on the shape of the MNF-force relationship. A significant compensation of the MNF was found for the repetitive isokinetic contractions. In conclusion, when investigating maximum dynamic contractions, decreases in MNF can be due to mechanisms similar to those found during sustained static contractions (force-independent component of fatigue) and in some subjects due to a direct effect of the change in force (force-dependent component of fatigue).
冯三营; 薛留根
2012-01-01
We consider nonlinear semiparametric models with measurement error (EV) in the nonparametric covariate. When the measurement error is ordinarily smooth, we obtain the maximum empirical likelihood estimators of the regression coefficients, the smooth function and the error variance by using the empirical likelihood method. The asymptotic normality and consistency of the proposed estimators are proved under some appropriate conditions. Finally, the finite-sample performance of the proposed method is illustrated in a simulation study.
Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro
2005-12-21
In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we proposed to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.
Eggers, G. L.; Lewis, K. W.; Simons, F. J.
2012-12-01
Venus has undergone a markedly different evolution than Earth. Its tectonics do not resemble the plate-tectonic system observed on Earth, and many surface features—such as tesserae and coronae—lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere. Lithospheric parameters such as the effective elastic thickness have previously been estimated from the correlation between topography and gravity anomalies, either in the space domain or the spectral domain (where admittance or coherence functions are estimated). Correlation and spectral analyses that have been obtained on Venus have been limited by geometry (typically, only rectangular or circular data windows were used), and most have lacked robust error estimates. There are two levels of error: the first being how well the correlation, admittance or coherence can be estimated; the second and most important, how well the lithospheric elastic thickness can be estimated from those. The first type of error is well understood, via classical analyses of resolution, bias and variance in multivariate spectral analysis. Understanding this error leads to constructive approaches of performing the spectral analysis, via multi-taper methods (which reduce variance) with well-chosen optimal tapers (to reduce bias). The second type of error requires a complete analysis of the coupled system of differential equations that describes how certain inputs (the unobservable initial loading by topography at various interfaces) are being mapped to the output (final, measurable topography and gravity anomalies). The equations of flexure have one unknown: the flexural rigidity or effective elastic thickness—the parameter of interest. Fortunately, we have recently come to a full understanding of this second type of error, and derived a maximum-likelihood estimation (MLE) method that results in unbiased and minimum-variance estimates of the flexural rigidity under a variety of initial
Kim, Bong-Guk; Cho, Yang-Ki; Kim, Bong-Gwan; Kim, Young-Gi; Jung, Ji-Hoon
2015-04-01
Subsurface temperature plays an important role in determining heat content in the upper ocean, which is crucial in long-term and short-term weather systems. Furthermore, subsurface temperature significantly affects ocean ecology. In this study, a simple and practical algorithm is proposed. If subsurface temperature changes are assumed to be proportional to surface heating or cooling, the subsurface temperature at each depth (Sub_temp) can be estimated as Sub_temp(i) = Clm_temp(i) + dif0 * ratio(i), where i is the depth index, Clm_temp is the temperature from climatology, dif0 is the temperature difference between satellite and climatology at the surface, and ratio is the ratio of temperature variability at each depth to surface temperature variability. Subsurface temperatures using this algorithm from climatology (WOA2013) and satellite SST (OSTIA) were calculated in the sea around the Korean peninsula. Validation against in-situ observation data shows good agreement in the upper 50 m layer, with RMSE (root mean square error) less than 2 K. The RMSE is smallest, less than 1 K, in winter when the surface mixed layer is thick, and largest, about 2-3 K, in summer when the surface mixed layer is shallow. The strong thermocline and large variability of the mixed-layer depth might cause the large RMSE in summer. Applying mixed-layer depth information in the algorithm may improve subsurface temperature estimation in summer. Spatial-temporal details on the improvement and its causes will be discussed.
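The estimation rule described above is a one-liner per depth level; the following minimal sketch (names and numbers are illustrative, not the WOA2013/OSTIA data) applies the surface anomaly dif0 down a climatological profile:

```python
def subsurface_temp(clm_temp, dif0, ratio):
    """Sub_temp(i) = Clm_temp(i) + dif0 * ratio(i), per the abstract."""
    return [c + dif0 * r for c, r in zip(clm_temp, ratio)]

# Satellite SST is 2 K warmer than climatology at the surface; the anomaly
# is projected downward with a ratio profile that decays with depth.
profile = subsurface_temp([20.0, 15.0, 10.0], 2.0, [1.0, 0.6, 0.1])
# -> approximately [22.0, 16.2, 10.2]
```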
Warren, M. A.; Quartly, G. D.; Shutler, J. D.; Miller, P. I.; Yoshikawa, Y.
2016-09-01
Attempts to automatically estimate surface current velocities from satellite-derived thermal or visible imagery face the limitations of data occlusion due to cloud cover, the complex evolution of features and the degradation of their surface signature. The Geostationary Ocean Color Imager (GOCI) provides a chance to reappraise such techniques due to its multiyear record of hourly high-resolution visible spectrum data. Here we present the results of applying a Maximum Cross Correlation (MCC) technique to GOCI data. Using a combination of simulated and real data we derive suitable processing parameters and examine the robustness of different satellite products, those being water-leaving radiance and chlorophyll concentration. These estimates of surface currents are evaluated using High Frequency (HF) radar systems located in the Tsushima (Korea) Strait. We show the performance of the MCC approach varies depending on the amount of missing data and the presence of strong optical contrasts. Using simulated data it was found that patchy cloud cover occupying 25% of the image pair reduces the number of vectors by 20% compared to using perfect images. Root mean square errors between the MCC and HF radar velocities are of the order of 20 cm s-1. Performance varies depending on the wavelength of the data, with the blue-green products out-performing the red and near infra-red products. Application of MCC to GOCI chlorophyll data results in similar performance to radiances in the blue-green bands. The technique has been demonstrated using specific examples of an eddy feature and tidally induced features in the region.
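The core of an MCC scheme is finding the displacement that maximizes the normalized cross-correlation between a template window in the first image and shifted windows in the second. A minimal 1-D sketch (real MCC uses 2-D windows over image pairs; names and data are illustrative):

```python
def mcc_lag(trace1, trace2, max_lag):
    """Lag of trace2 relative to trace1 that maximizes the normalized
    cross-correlation over a central template window."""
    def ncc(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0
    ref = trace1[max_lag:len(trace1) - max_lag]
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: ncc(ref, trace2[max_lag + lag:
                                               len(trace2) - max_lag + lag]))

# A tracer patch that moved 2 pixels between acquisitions gives lag 2;
# velocity = lag * pixel_size / time_between_images.
lag = mcc_lag([0, 0, 1, 3, 1, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 3, 1, 0, 0, 0], 3)
```

Missing (cloud-masked) pixels would be excluded from the sums in an operational version, which is why cloud cover reduces the number of usable vectors.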
Maximum likelihood channel estimation based on nonlinear filter
沈壁川; 郑建宏; 申敏
2008-01-01
For a long finite channel impulse response, accurate maximum likelihood channel estimation is computationally expensive due to the high dimension of the parameter space, and approximate approaches are usually adopted. By exploiting the noise suppression and signal extraction of the nonlinear Teager-Kaiser filter, a likelihood ratio for channel estimation is defined to represent the probability distribution of the channel parameters. Maximization of this likelihood function leads to first searching for the extrema of the path delays and then for the complex attenuations. Computer simulations were conducted, and the results show performance improvements in joint detection compared to the non-likelihood approach.
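The nonlinear filter referenced above is the discrete Teager-Kaiser energy operator. A minimal sketch of the operator itself (the channel-estimation likelihood built on top of it is not reproduced here):

```python
import math

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# For a clean sinusoid A*sin(w*n), the operator output is the constant
# A^2 * sin(w)^2, which is why it tracks signal energy while responding
# only weakly to slowly varying noise components.
tone = [math.sin(0.3 * n) for n in range(32)]
energy = teager_kaiser(tone)  # all values equal sin(0.3)**2
```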
Jonas W. Pedersen
2016-09-01
Full Text Available High quality on-line flow forecasts are useful for real-time operation of urban drainage systems and wastewater treatment plants. This requires computationally efficient models, which are continuously updated with observed data to provide good initial conditions for the forecasts. This paper presents a way of updating conceptual rainfall-runoff models using Maximum a Posteriori estimation to determine the most likely parameter constellation at the current point in time. This is done by combining information from prior parameter distributions and the model goodness of fit over a predefined period of time that precedes the forecast. The method is illustrated for an urban catchment, where flow forecasts of 0–4 h are generated by applying a lumped linear reservoir model with three cascading reservoirs. Radar rainfall observations are used as input to the model. The effects of different prior standard deviations and lengths of the auto-calibration period on the resulting flow forecast performance are evaluated. We were able to demonstrate that, if properly tuned, the method leads to a significant increase in forecasting performance compared to a model without continuous auto-calibration. Delayed responses and erratic behaviour in the parameter variations are, however, observed and the choice of prior distributions and length of auto-calibration period is not straightforward.
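The Maximum a Posteriori update described above combines prior parameter distributions with the model's goodness of fit over the auto-calibration window. A hedged sketch of such an objective (names are hypothetical; Gaussian priors and unit observation variance are assumed, not stated by the paper):

```python
def neg_log_posterior(params, prior_mean, prior_sd, simulate, observed):
    """Negative log-posterior: Gaussian prior penalties plus squared
    residuals over the auto-calibration window (unit noise variance)."""
    prior_term = sum(((p - m) / s) ** 2 / 2.0
                     for p, m, s in zip(params, prior_mean, prior_sd))
    data_term = sum((o - y) ** 2 / 2.0
                    for o, y in zip(observed, simulate(params)))
    return prior_term + data_term

# Toy linear-reservoir stand-in: flow proportional to one rate constant.
simulate = lambda p: [p[0] * t for t in (0.0, 1.0, 2.0)]
obs = [0.0, 1.0, 2.0]
cost_at_truth = neg_log_posterior([1.0], [1.0], [0.5], simulate, obs)  # 0.0
cost_off = neg_log_posterior([1.5], [1.0], [0.5], simulate, obs)       # > 0
```

Minimizing this objective before each forecast gives the "most likely parameter constellation at the current point in time"; tightening `prior_sd` damps the erratic parameter variations the paper reports, at the cost of slower adaptation.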
TMT third-mirror shafting system alignment based on maximum likelihood estimation
安其昌; 张景旭; 孙敬伟
2013-01-01
To complete the testing and alignment of the TMT third-mirror shafting, maximum likelihood estimation was introduced. First, a space line is identified as the intersection of two fitted planes through fixed points. Then, accounting for noise in the measured data, maximum likelihood estimation is used to estimate the parameters of the third-mirror shafting. On a training set with Gaussian white noise generated in MATLAB, the angle between the fitted axis and the ideal axis was reduced from 6.29" to 5.24", an improvement of 17%. Finally, a Vantage laser tracker was chosen as the testing tool for the TMT large shafting. Using this optimization, the residual alignment error of the TMT third-mirror shafting was 2.9", below the TMT specification of 4". The maximum likelihood linear fit applied here to TMT third-mirror shafting alignment provides a real-time, widely applicable method, and is also of value for the testing and adjustment of shafting in other large-aperture optical systems.
Ceramic subsurface marker prototypes
Lukens, C.E. [Rockwell International Corp., Richland, WA (United States). Rockwell Hanford Operations
1985-05-02
The client submitted 5 sets of porcelain and stoneware subsurface (radioactive site) marker prototypes (31 markers each set). The following were determined: compressive strength, thermal shock resistance, thermal crazing resistance, alkali resistance, color retention, and chemical resistance.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-02-22
The objectives of this report are: (1) to present a simplified conceptual model for release from buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (2) to provide maximum water concentrations and the corresponding mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (3) to estimate the maximum concentration in a well located outside the fill material; and (4) to perform a sensitivity analysis of key parameters.
Best Practice -- Subsurface Investigations
Clark Scott
2010-03-01
These best practices for Subsurface Survey processes were developed at the Idaho National Laboratory (INL) and later shared and formalized by a sub-committee under the Electrical Safety Committee of EFCOG. The developed best practice is best characterized as a Tier II (enhanced) survey process for subsurface investigations. A result of this process has been an increase in safety and a lowering of overall cost, once utility hits and their related costs are factored in. The process involves improving the methodology and thoroughness of the survey and reporting processes, that is, improvement in tool use rather than in the tools themselves. It is hoped that the process described here can be implemented at other sites seeking to improve their subsurface investigation results with little upheaval to their existing systems.
The Serpentinite Subsurface Microbiome
Schrenk, M. O.; Nelson, B. Y.; Brazelton, W. J.
2011-12-01
Microbial habitats hosted in ultramafic rocks constitute substantial, globally-distributed portions of the subsurface biosphere, occurring both on the continents and beneath the seafloor. The aqueous alteration of ultramafics, in a process known as serpentinization, creates energy rich, high pH conditions, with low concentrations of inorganic carbon which place fundamental constraints upon microbial metabolism and physiology. Despite their importance, very few studies have attempted to directly access and quantify microbial activities and distributions in the serpentinite subsurface microbiome. We have initiated microbiological studies of subsurface seeps and rocks at three separate continental sites of serpentinization in Newfoundland, Italy, and California and compared these results to previous analyses of the Lost City field, near the Mid-Atlantic Ridge. In all cases, microbial cell densities in seep fluids are extremely low, ranging from approximately 100,000 to less than 1,000 cells per milliliter. Culture-independent analyses of 16S rRNA genes revealed low-diversity microbial communities related to Gram-positive Firmicutes and hydrogen-oxidizing bacteria. Interestingly, unlike Lost City, there has been little evidence for significant archaeal populations in the continental subsurface to date. Culturing studies at the sites yielded numerous alkaliphilic isolates on nutrient-rich agar and putative iron-reducing bacteria in anaerobic incubations, many of which are related to known alkaliphilic and subsurface isolates. Finally, metagenomic data reinforce the culturing results, indicating the presence of genes associated with organotrophy, hydrogen oxidation, and iron reduction in seep fluid samples. Our data provide insight into the lifestyles of serpentinite subsurface microbial populations and targets for future quantitative exploration using both biochemical and geochemical approaches.
Bopp, L; Resplandy, L; Untersee, A; Le Mezo, P; Kageyama, M
2017-09-13
All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'. © 2017 The Author(s).
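The compensation described above follows from the standard decomposition O2 = O2sat - AOU, so a change in oxygen splits into a saturation (warming) term and a utilization/ventilation term. An illustrative sketch with hypothetical numbers, not values from the paper:

```python
def delta_o2(d_o2sat, d_aou):
    """dO2 = dO2sat - dAOU: saturation and utilization changes can compensate."""
    return d_o2sat - d_aou

# Warming lowers saturation by 5 mmol m^-3, but stronger ventilation also
# lowers AOU by 4 mmol m^-3, so the net oxygen change is only -1 mmol m^-3.
net = delta_o2(-5.0, -4.0)  # -1.0
```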
V. Moca
2006-10-01
In the pedo-climatic conditions of Suceava County, which extends over a total surface of 855,300 ha, the share of agricultural land affected by temporary or permanent excess humidity varies from south to north and from east to west between 30% and 40%, i.e. almost 100,000 ha. On these soils with groundwater or pluvial excess, hydro-ameliorative drainage systems have been installed, together with complex agro-ameliorative works. To estimate the long-term effect of subsurface drainage combined with agro-pedo-ameliorative works on some physical and hydrophysical characteristics, the soil and environmental conditions of the Baia field were analyzed. For this purpose, the agrophysical conditions of an albic pseudogleyed luvisol (SRCS-1980), respectively an albic stagnic-glossic luvosol (SRTS-2003), drained and cultivated, were analyzed after 28 years of use (1978-2006). The data obtained on the water balance and the evolution of the major physical properties of the soil under the influence of drainage and amelioration works show, in the first stage (1978-1986), a general improvement of the aero-hydric state and of the physical-chemical conditions. Over the next two 10-year experimental cycles, an increase in the compaction degree of the drained and cultivated soil at 0-30 cm depth was noticed, from weakly loose to moderately compacted, depending on the persistence of the reclamation technologies.
夏天; 孔繁超
2008-01-01
This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of the maximum quasi-likelihood estimator (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].
李红; 雷志勇
2011-01-01
Maximum entropy spectral estimation and an LMS adaptive algorithm were proposed to extract the reflection echo signal of a laser ranging system. The signal detection principle of maximum entropy spectral estimation was studied, the Burg algorithm was used to obtain the parameters of the AR model, and an LMS adaptive filter was designed to extract the useful signal from the weak echo. The application of Burg maximum entropy spectral estimation to echo-signal detection in laser ranging systems was analyzed. Simulation results show that combining maximum entropy spectral estimation with the LMS adaptive algorithm can effectively extract the laser reflection echo signal from background noise.
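The LMS half of the scheme is a short loop: an FIR filter whose weights are nudged along the instantaneous error gradient. A minimal sketch (not the paper's implementation; the Burg/AR spectral step is omitted, and the echo signal here is synthetic):

```python
import math

def lms_filter(x, d, order=4, mu=0.02):
    """LMS adaptive filter: adapt FIR weights so the output approximates d
    from the reference input x; returns final weights and the error history."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                      # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u))      # filter output
        e = d[n] - y                                  # estimation error
        w = [wi + 2.0 * mu * e * ui for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors

# Learn a scaled, delayed echo of the reference: the error decays toward zero
# as the weights converge to the echo response.
x = [math.sin(0.2 * n) for n in range(600)]
d = [0.0] + [0.5 * v for v in x[:-1]]                 # half amplitude, 1-sample lag
w, errors = lms_filter(x, d)
```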
Subsurface connection methods for subsurface heaters
Vinegar, Harold J. (Bellaire, TX); Bass, Ronald Marshall (Houston, TX); Kim, Dong Sub (Sugar Land, TX); Mason, Stanley Leroy (Allen, TX); Stegemeier, George Leo (Houston, TX); Keltner, Thomas Joseph (Spring, TX); Carl, Jr., Frederick Gordon (Houston, TX)
2010-12-28
A system for heating a subsurface formation is described. The system includes a first elongated heater in a first opening in the formation. The first elongated heater includes an exposed metal section in a portion of the first opening. The portion is below a layer of the formation to be heated. The exposed metal section is exposed to the formation. A second elongated heater is in a second opening in the formation. The second opening connects to the first opening at or near the portion of the first opening below the layer to be heated. At least a portion of an exposed metal section of the second elongated heater is electrically coupled to at least a portion of the exposed metal section of the first elongated heater in the portion of the first opening below the layer to be heated.
SUBSURFACE EMPLACEMENT TRANSPORTATION SYSTEM
T. Wilson; R. Novotny
1999-11-22
The objective of this analysis is to identify issues and criteria that apply to the design of the Subsurface Emplacement Transportation System (SET). The SET consists of the track used by the waste package handling equipment, the conductors and related equipment used to supply electrical power to that equipment, and the instrumentation and controls used to monitor and operate those track and power supply systems. Major considerations of this analysis include: (1) Operational life of the SET; (2) Geometric constraints on the track layout; (3) Operating loads on the track; (4) Environmentally induced loads on the track; (5) Power supply (electrification) requirements; and (6) Instrumentation and control requirements. This analysis will provide the basis for development of the system description document (SDD) for the SET. This analysis also defines the interfaces that need to be considered in the design of the SET. These interfaces include, but are not limited to, the following: (1) Waste handling building; (2) Monitored Geologic Repository (MGR) surface site layout; (3) Waste Emplacement System (WES); (4) Waste Retrieval System (WRS); (5) Ground Control System (GCS); (6) Ex-Container System (XCS); (7) Subsurface Electrical Distribution System (SED); (8) MGR Operations Monitoring and Control System (OMC); (9) Subsurface Facility System (SFS); (10) Subsurface Fire Protection System (SFR); (11) Performance Confirmation Emplacement Drift Monitoring System (PCM); and (12) Backfill Emplacement System (BES).
Zhao, W.; Cella, M.; Pasqua, O. Della; Burger, D.M.; Jacqz-Aigrain, E.
2012-01-01
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in
Wave-Based Subsurface Guide Star
Lehman, S K
2011-07-26
Astronomical or optical guide stars are either natural or artificial point sources located above the Earth's atmosphere. When imaged from ground-based telescopes, they are distorted by atmospheric effects. Knowing the guide star is a point source, the atmospheric distortions may be estimated and then deconvolved or mitigated in subsequent imagery. Extending the guide star concept to wave-based measurement systems, including acoustic, seismo-acoustic, ultrasonic, and radar systems, a strong artificial scatterer (either acoustic or electromagnetic) may be buried or inserted, or a pre-existing or natural sub-surface point scatterer may be identified, imaged, and used as a guide star to determine properties of the sub-surface volume. That is, a data collection is performed on the guide star and the sub-surface environment is reconstructed or imaged using an optimizer, assuming the guide star is a point scatterer. The optimization parameters are the transceiver height and the bulk sub-surface background refractive index. Once identified, the refractive index may be used in subsequent reconstructions of sub-surface measurements. The wave-based guide star description presented in this document is for a multimonostatic ground penetrating radar (GPR) but is applicable to acoustic, seismo-acoustic, and ultrasonic measurement systems operating in multimonostatic, multistatic, multibistatic, etc., modes.
SOLID OXYGEN SOURCE FOR BIOREMEDIATION IN SUBSURFACE SOILS
Sodium percarbonate was encapsulated in poly(vinylidene chloride) to determine its potential as a slow-release oxygen source for biodegradation of contaminants in subsurface soils. In laboratory studies under aqueous conditions, the encapsulated sodium percarbonate was estimate...
US Fish and Wildlife Service, Department of the Interior — Complete estimates of waterfowl populations in each of 164 management units through the forty-eight coterminous United States were systematically developed for May...
Nielsen, Anders; Lewy, Peter
2002-01-01
A simulation study was carried out for a separable fish stock assessment model including commercial and survey catch-at-age and effort data. All catches are considered stochastic variables subject to sampling and process variations. The results showed that the Bayes estimator of spawning biomass ...
Borg, Søren; Persson, U.; Jess, T.;
2010-01-01
Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission...
武大勇; 李锋
2015-01-01
Linear semiparametric regression models with missing data were considered. The maximum empirical likelihood estimators of the regression coefficients and the smoothing function were obtained by the maximum empirical likelihood method. The asymptotic normality and consistency of the proposed estimators were proved under some appropriate conditions.
杜宁
2002-01-01
A new technique is introduced to approximate the values of the interpolated points instead of using local quadratic interpolation. A modified characteristic difference scheme based on the new technique is formulated to treat a convection-diffusion problem of the form cu_t + bu_x - (au_x)_x = f. Convergence and stability of the scheme are analyzed, and an error estimate of O(Δt + h^2) in the maximum norm is presented.
张婷婷; 高金玲
2014-01-01
The iterative algorithm for maximum likelihood estimation in logistic regression can be difficult to solve, so a simpler estimation method, empirical logistic regression, is sought from both a theoretical and an applied perspective. The analysis shows that, when the sample size is very large, the empirical logistic regression method is more scientific and practical than the maximum likelihood estimation method, and the two methods give consistent results for the same data set, while empirical logistic regression is simpler, which is very important for practitioners.
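The iterative MLE the abstract refers to can be sketched as plain gradient ascent on the logistic log-likelihood (a hedged, minimal stand-in for the usual Newton-Raphson iterations; the data and learning rate below are illustrative):

```python
import math

def logistic_mle(xs, ys, iters=5000, lr=0.5):
    """Fit P(y=1|x) = 1/(1+exp(-(a+b*x))) by gradient ascent on the
    log-likelihood (intercept a, slope b)."""
    a = b = 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (y - p)            # gradient w.r.t. intercept
            gb += (y - p) * x        # gradient w.r.t. slope
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Outcomes that become more likely as x grows yield a positive slope.
a, b = logistic_mle([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0], [0, 0, 1, 0, 1, 1])
```

Empirical logistic regression replaces this iteration with a closed-form least-squares fit to empirical logits, which is where its simplicity for large samples comes from.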
焦亚萌; 黄建国; 侯云山
2011-01-01
A new maximum likelihood direction-of-arrival (DOA) estimator based on ant colony optimization (ACOML) is proposed to reduce the computational complexity of the multi-dimensional nonlinear search in the maximum likelihood (ML) DOA estimator. By extending the pheromone deposition process of traditional ant colony optimization into a pheromone Gaussian-kernel probability density function in continuous space, ant colony optimization is combined with the maximum likelihood method to lighten the computational burden and obtain a nonlinear global optimum of the ML DOA estimate. Simulations show that ACOML provides performance similar to that of the original ML method, while its computational cost is only 1/15 that of ML.
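The continuous-space ACO idea can be sketched in one dimension: candidate solutions are drawn from a Gaussian "pheromone" kernel around the best solution found so far, and the kernel narrows as pheromone concentrates. This is a generic illustrative optimizer, not the paper's ACOML algorithm, and the objective below stands in for the ML DOA likelihood surface:

```python
import random

def aco_maximize(f, lo, hi, ants=20, iters=60, shrink=0.9, seed=7):
    """Minimal continuous ant-colony search: each generation samples
    candidates from a Gaussian kernel centred on the best-so-far, then
    narrows the kernel (pheromone concentration)."""
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    sigma = (hi - lo) / 2.0
    for _ in range(iters):
        for _ in range(ants):
            cand = min(hi, max(lo, rng.gauss(best, sigma)))
            if f(cand) > f(best):
                best = cand
        sigma *= shrink
    return best

# Smooth 1-D surrogate objective with its maximum at x = 2.
peak = aco_maximize(lambda x: -(x - 2.0) ** 2, 0.0, 5.0)
```

The saving reported in the abstract comes from evaluating the likelihood only at sampled points like these instead of over a dense multi-dimensional grid.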
Kayode, John Stephen; Nawawi, M. N. M.; Abdullah, Khiruddin B.; Khalil, Amin E.
2017-01-01
Aeromagnetic data and remotely sensed imagery were integrated, using PCI Geomatica 2013 software, with the intent of mapping subsurface geological structures in part of the southwestern basement complex of Nigeria. The data, obtained from the Nigerian Geological Survey Agency, were corrected by regional-residual separation of the total magnetic field anomalies, enhanced, and had the International Geomagnetic Reference Field removed. The principal objective of this study is therefore to introduce a rapid and efficient method of subsurface structural depth estimation and structural index evaluation by incorporating the Euler deconvolution technique into PCI Geomatica 2013 to prospect for subsurface geological structures. The shape and depth of burial helped to define these structures from the regional aeromagnetic map. The method enabled structural indices between 0.5 SI and 3.0 SI to be delineated automatically at a maximum depth of 1.1 km, clearly showing the best depth estimates for all the structural indices. The results delineate two major magnetic belts in the area: the first shows an elongated ridge-like structure trending mostly north-northeast to south-southwest, while the other anomalies trend primarily in the northeast, northwest and northeast-southwest parts of the study area and could be attributed to basement-complex granitic intrusions from the tectonic history of the area. Most of the second set of structures are various linear features distinct from the first. A significant offset was delineated in the core segment of the study area, suggesting a major subsurface geological feature that controls mineralisation in this area.
Milinkovitch Michel C
2010-07-01
Full Text Available Abstract Background The development, in the last decade, of stochastic heuristics implemented in robust application softwares has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood, including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs in 32 and 64-bits systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these
Konstandinos G. Raptis
2012-01-01
Full Text Available The purpose of this study is the consideration of loading and contact problems encountered in rotating machine elements, especially toothed gears. The latter are some of the most commonly used mechanical components for rotary motion and power transmission. This fact underscores the need for improved reliability and enhanced service life, which require precise and clear knowledge of the stress field at the gear tooth. This study investigates the maximum allowable stresses occurring during spur gear tooth meshing, computed using Niemann's formulas at the Highest Point of Single Tooth Contact (HPSTC). Gear material, module, power rating, and number of teeth are considered as variable parameters. Furthermore, the maximum allowable stresses for maximum power transmission conditions are considered while keeping the other parameters constant. After the application of Niemann's formulas to both loading cases, the derived results are compared to the respective estimations of the Finite Element Method (FEM) using ANSYS software. Comparison of the results derived from Niemann's formulas and FEM shows that deviations between the two methods are kept at a low level for both loading cases, independently of the applied power (either random or maximum) and the respective tangential load.
L. Ocola
2008-01-01
Full Text Available Post-disaster reconstruction management of urban areas requires timely information on the ground-response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance), and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_{MS} as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_{MS} scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong-motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places where geotechnical data and predominant ground frequency information were available. The observed and computed peak acceleration values at nearby sites agree well.
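The abstract describes relating a logarithmic intensity scale to peak ground acceleration through a linear regression. A minimal sketch of such a relation, with purely illustrative coefficients (the paper's actual regression constants are not given here), might look like:

```python
import math

# Hypothetical regression constants of the form I = a*log10(pga) + b;
# these values are illustrative only, not the paper's derived coefficients.
A_COEF = 3.0   # slope (assumption)
B_COEF = 1.5   # intercept (assumption)

def intensity_from_pga(pga_cm_s2: float) -> float:
    """Macroseismic intensity from peak ground acceleration (cm/s^2),
    assuming a linear-in-log regression as the abstract describes."""
    return A_COEF * math.log10(pga_cm_s2) + B_COEF

def pga_from_intensity(intensity: float) -> float:
    """Invert the regression to recover peak ground acceleration."""
    return 10.0 ** ((intensity - B_COEF) / A_COEF)
```

The inversion is exact because the model is monotone in log-acceleration, which is what lets macroseismic ratings stand in for instrumental records at sites without strong-motion data.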
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large, untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. We suggest that in the future PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
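The extreme-value idea behind such a distribution can be sketched under standard (assumed, not the paper's) ingredients: earthquakes above a threshold magnitude arriving as a Poisson process with Gutenberg-Richter magnitudes. All parameter values below are illustrative.

```python
import math

def max_magnitude_cdf(m: float, t_years: float,
                      rate: float = 5.0, b: float = 1.0,
                      m_min: float = 4.0) -> float:
    """P(largest magnitude observed in t_years <= m), assuming events with
    magnitude >= m_min arrive as a Poisson process with the given annual
    rate, and magnitudes follow an (untruncated) Gutenberg-Richter law
    with the given b-value. Illustrative sketch, not the paper's model."""
    if m < m_min:
        # Only satisfied if no events occur at all during the interval.
        return math.exp(-rate * t_years)
    # Fraction of events exceeding m under Gutenberg-Richter.
    exceed_prob = 10.0 ** (-b * (m - m_min))
    # Thinned Poisson process: no exceedances of m in t_years.
    return math.exp(-rate * t_years * exceed_prob)
```

Evaluating this CDF over a grid of candidate M values is one way to see the sensitivity issue the abstract raises: for long recurrence times the curve is nearly flat, so very different M estimates yield almost indistinguishable forecasts.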
Lee, H.; Haimson, B.
2007-12-01
drillhole wall conditions is drastically different from that conventionally expected, but is compatible with the breakout formation mechanism in granite (Haimson, Int. J. Rock Mech., 2007). All the 'unjacketed' true triaxial strength data can be fitted by a simple function in the octahedral shear stress versus octahedral normal stress domain, yielding a Nadai-type true triaxial strength criterion. The criterion can be used in conjunction with breakouts that have been located within the cored zone to yield the maximum horizontal in situ stress σH when the other two principal stresses are known. Assuming that the state of stress at breakout-drillhole intersections (located for example by BHTV logging) is sufficient to bring about brittle failure (Vernik and Zoback, 1992), one can substitute the known principal stresses there (obtained from the Kirsch solution) for the corresponding values in the criterion. The in situ σv is given by the overburden density, σh is typically obtained from hydrofrac shut-in pressures, breakout width is extracted from BHTV logs, borehole fluid pressure is a function of its density, and the Poisson's ratio is obtained from mechanical lab testing. The only unknown, σH, is thus readily computed. An actual computation was not carried out because data on hydrofrac pressures and breakout dimensions were not available at the time of this submission.
Maximum Power Estimation for CMOS Sequential Circuits by Genetic Algorithm
卢君明; 林争辉
2001-01-01
Estimation of maximum power dissipation is important in designing highly reliable VLSI systems. However, maximum power estimation for CMOS circuits is essentially a combinatorial optimization problem, which has exponential complexity in the worst case. For large-scale sequential circuits, because the sequential relationship between the primary inputs and states must be considered, it is even more CPU-intensive to exhaustively search for the optimal input patterns that induce maximum power. In this paper, a novel approach is proposed to obtain a lower bound on the maximum power consumption using a Genetic Algorithm (GA): input and internal-state patterns with high power dissipation are selected by the GA and the circuit is simulated to estimate the maximum power of the sequential circuit. Experiments with ISCAS-89 benchmark circuits show that our approach generates lower bounds of a quality that cannot be achieved using simulation-based techniques, with a clear advantage for circuits with large gate counts and computation time that is roughly linear in the number of logic gates. In addition, a Monte Carlo based technique to estimate maximum power dissipation is realized.
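The abstract's GA-based search can be illustrated with a minimal genetic algorithm over bit-string input patterns. The objective below is a toy stand-in for a power simulator (counting signal transitions), not the paper's actual power model; all parameters are illustrative.

```python
import random

def ga_maximize(fitness, n_bits=16, pop_size=30, generations=60, seed=1):
    """Minimal GA over bit-strings: tournament selection, one-point
    crossover, bit-flip mutation. Returns the best pattern found and its
    fitness, which is a lower bound on the true maximum (sketch)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament select
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            for i in range(n_bits):                     # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best, fitness(best)

def toy_power(bits):
    """Toy surrogate for simulated power: number of adjacent transitions."""
    return sum(a != b for a, b in zip(bits, bits[1:]))
```

Because the GA only ever reports the best pattern it has actually simulated, its result is guaranteed to be a valid lower bound on the maximum, which is exactly the property the abstract claims for the method.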
肖枝洪; 朱强
2009-01-01
In this paper, we study a kind of truncated and censored data model. Using Taylor asymptotic expansion, it is shown that the maximum likelihood estimator of the unknown parameter θ obeys a moderate deviation principle under certain regularity conditions, a result finer than asymptotic normality; an exact expression for the rate function is obtained.
Subsurface Ventilation System Description Document
Eric Loros
2001-07-25
The Subsurface Ventilation System supports the construction and operation of the subsurface repository by providing air for personnel and equipment and temperature control for the underground areas. Although the system is located underground, some equipment and features may be housed or located above ground. The system ventilates the underground by providing ambient air from the surface throughout the subsurface development and emplacement areas. The system provides fresh air for a safe work environment and supports potential retrieval operations by ventilating and cooling emplacement drifts. The system maintains compliance within the limits established for approved air quality standards. The system maintains separate ventilation between the development and waste emplacement areas. The system shall remove a portion of the heat generated by the waste packages during preclosure to support thermal goals. The system provides temperature control by reducing drift temperature to support potential retrieval operations. The ventilation system has the capability to ventilate selected drifts during emplacement and retrieval operations. The Subsurface Facility System is the main interface with the Subsurface Ventilation System. The location of the ducting, seals, filters, fans, emplacement doors, regulators, and electronic controls are within the envelope created by the Ground Control System in the Subsurface Facility System. The Subsurface Ventilation System also interfaces with the Subsurface Electrical System for power, the Monitored Geologic Repository Operations Monitoring and Control System to ensure proper and safe operation, the Safeguards and Security System for access to the emplacement drifts, the Subsurface Fire Protection System for fire safety, the Emplacement Drift System for repository performance, and the Backfill Emplacement and Subsurface Excavation Systems to support ventilation needs.
Subsurface Ventilation System Description Document
NONE
2000-10-12
The Subsurface Ventilation System supports the construction and operation of the subsurface repository by providing air for personnel and equipment and temperature control for the underground areas. Although the system is located underground, some equipment and features may be housed or located above ground. The system ventilates the underground by providing ambient air from the surface throughout the subsurface development and emplacement areas. The system provides fresh air for a safe work environment and supports potential retrieval operations by ventilating and cooling emplacement drifts. The system maintains compliance within the limits established for approved air quality standards. The system maintains separate ventilation between the development and waste emplacement areas. The system shall remove a portion of the heat generated by the waste packages during preclosure to support thermal goals. The system provides temperature control by reducing drift temperature to support potential retrieval operations. The ventilation system has the capability to ventilate selected drifts during emplacement and retrieval operations. The Subsurface Facility System is the main interface with the Subsurface Ventilation System. The location of the ducting, seals, filters, fans, emplacement doors, regulators, and electronic controls are within the envelope created by the Ground Control System in the Subsurface Facility System. The Subsurface Ventilation System also interfaces with the Subsurface Electrical System for power, the Monitored Geologic Repository Operations Monitoring and Control System to ensure proper and safe operation, the Safeguards and Security System for access to the emplacement drifts, the Subsurface Fire Protection System for fire safety, the Emplacement Drift System for repository performance, and the Backfill Emplacement and Subsurface Excavation Systems to support ventilation needs.
Westaway, Rob; Scotney, Philip M; Younger, Paul L; Boyce, Adrian J
2015-03-01
Stewartby works, for a time the world's largest brickworks, began operation around the start of the twentieth century and closed in 2008. Subsurface temperature measurements are available in its vicinity, obtained as part of monitoring of an adjacent landfill in one of the former quarries for the Oxford Clay, which was the raw material for brick manufacture. A striking subsurface temperature anomaly, an increment of ~12°C, was first measured in 2004, and has subsequently decayed over time. The anomaly is centred beneath one of the former brick kilns, which operated between 1935 and 1991. To investigate processes of heat absorption by the shallow subsurface, this anomaly has been modelled as a consequence of conductive heat flow into the ground due to the operation of the ~3000 m² kiln. This modelling indicates that a very large amount of heat energy was transported into the subsurface; we estimate the typical downward surface heat flow during operation of the kiln as ~1 W m⁻² and the energy stored in the subsurface beneath it at its time of shutdown as ~6 TJ, or ~0.03% of that released by the fuel for heating the kiln, such that the total heat energy stored beneath this multi-kiln site peaked at ~200 TJ. The proportion of heat energy transported into the subsurface was relatively low due to the nature of the Oxford Clay, which has a low thermal conductivity (~0.8 W m⁻¹ °C⁻¹) and diffusivity (~0.3 mm² s⁻¹); in a more conductive lithology it might well have been three times greater. After kiln shutdown this subsurface thermal anomaly began to dissipate by upward heat conduction and release of heat into the atmosphere; at present about half of the peak energy stored remains, decreasing at ~1% per year, the maximum temperature anomaly being currently ~7°C at a depth of ~30 m and the typical upward heat flow during this span of time having exceeded the regional ~40 mW m⁻² background by roughly an order of magnitude. We believe this to be the first
Nielsen, Chris Valentin; Martins, Paulo A. F.; Bay, Niels Oluf
2016-01-01
New equipment for testing asperity deformation at various normal loads and subsurface elongations is presented. Resulting real contact area ratios increase heavily with increasing subsurface expansion due to lowered yield pressure on the asperities when imposing subsurface normal stress parallel ...... for estimating friction in the numerical modelling of metal forming processes....
Biogenic Carbon on Mars: A Subsurface Chauvinistic Viewpoint
Onstott, T. C.; Lau, C. Y. M.; Magnabosco, C.; Harris, R.; Chen, Y.; Slater, G.; Sherwood Lollar, B.; Kieft, T. L.; van Heerden, E.; Borgonie, G.; Dong, H.
2015-12-01
A review of 150 publications on the microbiology of the continental subsurface provides ~1,400 measurements of cellular abundances down to 4,800 meters depth. These data suggest that the continental subsurface biomass comprises ~10¹⁶⁻¹⁷ grams of carbon, which is higher than the most recent estimates of ~10¹⁵ grams of carbon (1 Gt) for the marine deep biosphere. If life developed early in Martian history and Mars sustained an active hydrological cycle during its first 500 million years, then is it possible that Mars could have developed a subsurface biomass of comparable size to that of Earth? Such a biomass would comprise a much larger fraction of the total known Martian carbon budget than does the subsurface biomass on Earth. More importantly, could a remnant of this subsurface biosphere survive to the present day? To determine how sustainable subsurface life could be in isolation from the surface, we have been studying subsurface fracture fluids from the Precambrian Shields in South Africa and Canada. In these environments the energetically efficient and deeply rooted acetyl-CoA pathway for carbon fixation plays a central role for the chemolithoautotrophic primary producers that form the base of the biomass pyramid. These primary producers appear to be sustained indefinitely by H2 generated through serpentinization and radiolytic reactions. Carbon isotope data suggest that in some subsurface locations a much larger population of secondary consumers is sustained by the primary production of biogenic CH4 from a much smaller population of methanogens. These inverted biomass and energy pyramids sustained by the cycling of CH4 could have been, and could still be, active on Mars. The C and H isotopic signatures of Martian CH4 remain key tools in identifying potential signatures of an extant Martian biosphere. Based upon our results to date, cavity ring-down spectroscopic technologies provide an option for making these measurements on future rover missions.
Maximum likelihood estimation for social network dynamics
Snijders, T.A.B.; Koskinen, J.; Schweinberger, M.
2010-01-01
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The m
程刘胜
2015-01-01
In mine UWB high-accuracy positioning systems, underground multipath, non-line-of-sight propagation, and limited network time synchronization accuracy cause large deviations in estimated arrival times. On the basis of a rational layout of the underground wireless network base stations, this paper proposes a maximum likelihood TOA (Time of Arrival) estimation algorithm based on multi-carrier time-frequency iteration: the fractional delay is iterated repeatedly to narrow the estimation error and determine a suitable search step, achieving an accurate TOA estimate of the signal. Simulation results show that the time-frequency-iterative maximum likelihood TOA estimation algorithm converges faster than the non-iterative algorithm, and at low signal-to-noise ratios it effectively improves the estimation accuracy compared with classical TOA estimation algorithms.
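The iterate-and-shrink search the abstract describes can be sketched as a generic coarse-to-fine 1-D delay search. The objective below is a hypothetical correlation-like surrogate, not the paper's multi-carrier likelihood; the scheme only illustrates how repeatedly refining the step narrows the estimation error.

```python
def refine_delay(objective, lo, hi, iterations=6, grid=10):
    """Coarse-to-fine 1-D search: evaluate the objective on a grid over
    [lo, hi], shrink the interval around the best grid point, and repeat
    with a smaller step each time (sketch of iterative delay refinement)."""
    for _ in range(iterations):
        step = (hi - lo) / grid
        best = max((lo + i * step for i in range(grid + 1)), key=objective)
        lo, hi = max(lo, best - step), min(hi, best + step)
    return (lo + hi) / 2.0

# Hypothetical objective peaking at the (assumed) true delay of 3.7 units.
TRUE_DELAY = 3.7
obj = lambda t: -(t - TRUE_DELAY) ** 2
```

Each iteration divides the search step by the grid factor, so the error shrinks geometrically, mirroring the faster convergence the abstract reports for the iterative estimator.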
王理同
2012-01-01
In a growth curve model, the generalized least squares estimator of the parameter matrix is a linear function of the response variables, while the maximum likelihood estimator is nonlinear, so statistical inference based on the maximum likelihood estimator is more complicated. To make its statistical inference simpler and more tractable, some authors have studied conditions under which the maximum likelihood estimator is completely equivalent to the generalized least squares estimator. Unfortunately, such conditions are rarely satisfied. Therefore, an approximate equivalence between them is suggested: consider the ratio of the two estimators' magnitudes under the Euclidean norm. If this ratio lies within a given permitted error, the maximum likelihood estimator is regarded as approximately equivalent to the generalized least squares estimator, which simplifies statistical inference for the maximum likelihood estimator.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. There is residual radioactive contamination from the plant which needs to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building, for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
夏天; 孔繁超
2008-01-01
This paper proposes regularity conditions that weaken those given by Zhu and Wei (1997). On the basis of the proposed regularity conditions, the existence, strong consistency, and asymptotic normality of the maximum likelihood estimator (MLE) are proved in exponential family nonlinear models (EFNMs). Our results may be regarded as a further improvement of the work of Zhu and Wei (1997).
房祥忠; 陈家鼎
2011-01-01
The model of nonhomogeneous Poisson processes with time-varying intensity function is applied in many fields. For the exponential polynomial model, a very widely used class of nonhomogeneous Poisson processes, the best convergence rate for the maximum likelihood estimator (MLE) of the parameters is obtained as the observation time tends to infinity.
Primary Estimation of the Maximum Application Quantity of Rare Earth Elements in Red Soil
褚海燕; 朱建国; 谢祖彬; 曹志洪; 李振高; 曾青
2001-01-01
The effects of the rare earth element lanthanum (La) on soil microbial activities were studied through an incubation experiment, and the maximum application quantity of rare earth elements in red soil was primarily estimated. La decreased soil microbial activities, and the sensitivity of microbial activities to La decreased in the order phenol decomposition > dehydrogenase activity > microbial biomass. From the standpoint of soil microbiology, the maximum application quantity of rare earth elements in red soil should be below 30 mg/kg.
The Mojave vadose zone: a subsurface biosphere analogue for Mars.
Abbey, William; Salas, Everett; Bhartia, Rohit; Beegle, Luther W
2013-07-01
If life ever evolved on the surface of Mars, it is unlikely that it would still survive there today, but as Mars evolved from a wet planet to an arid one, the subsurface environment may have presented a refuge from increasingly hostile surface conditions. Since the last glacial maximum, the Mojave Desert has experienced a similar shift from a wet to a dry environment, giving us the opportunity to study here on Earth how subsurface ecosystems in an arid environment adapt to increasingly barren surface conditions. In this paper, we advocate studying the vadose zone ecosystem of the Mojave Desert as an analogue for possible subsurface biospheres on Mars. We also describe several examples of Mars-like terrain found in the Mojave region and discuss ecological insights that might be gained by a thorough examination of the vadose zone in these specific terrains. Examples described include distributary fans (deltas, alluvial fans, etc.), paleosols overlain by basaltic lava flows, and evaporite deposits.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Subsurface Geotechnical Parameters Report
D. Rigby; M. Mrugala; G. Shideler; T. Davidsavor; J. Leem; D. Buesch; Y. Sun; D. Potyondy; M. Christianson
2003-12-17
The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities towards License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into the sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document addresses the program's need to answer specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues; in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal, and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce